Category: Artificial Intelligence

  • AI Is Powerful—But Let’s Be Honest, It’s Still Pretty Clumsy

I’m going to be honest and say that I use AI for a lot of things. I’ve used it for content writing, creating code, and research. I’ve played with image and video creation. I’ve created presentations and used inline editing features. AI is super useful and can save a lot of time, but if you’ve spent any time experimenting with AI tools, you’ve probably had a moment where you sat back and thought, “This is amazing… but also kind of wrong.”

    You’re not alone.

AI is having a moment, and its advocates are everywhere. Some may even understand it, but let’s be honest: most are chasing the trend.

    We’re told we need to “learn to prompt” or risk being left behind. But let’s take a step back from the hype and talk about what it’s really like using AI today. Spoiler alert: it’s not magic. It’s often confusing, frustrating, and flat-out inaccurate.

    Misunderstood Instructions Are the Norm, Not the Exception

Let’s start with the basics: AI often doesn’t do what you ask it to do. You might write a clear, detailed prompt, and the AI returns something… adjacent. Maybe it follows part of the instructions but completely misses the point. Or worse, it confidently invents something and then uses that invention as the foundation for an argument.

This is especially tricky when you’re trying to get nuanced output. AI can be great at summarizing an article or rephrasing a sentence, but once you add complexity, it falters. Ask it to create an image with specific symbolism, or to write in a particular tone, and it’s a toss-up.

Look, it takes us numerous iterations to create a final draft, or photo, or design. We expect it to be a process. We don’t give that grace to AI; we expect first-time perfection. Maybe that’s not fair.

    Graphics? Don’t Even Get Me Started

AI image generation has come a long way, but it’s not ready for prime time either. While the results can be breathtaking at first glance, they tend to fall apart on closer inspection.

    Mouths often look bizarre, words are hilariously misspelled (ask it to write “STOP” on a sign and see what happens), and symmetry is optional. Want to include hands? Buckle up. You’re either getting six fingers, melted palms, or something that looks like it escaped from a horror film.

    It’s impressive, yes—but far from dependable.

    Hallucinations: The Fancy Word for Making Stuff Up

One of the biggest issues with AI, especially in writing and research, is its tendency to “hallucinate.” That’s a polite way of saying it makes things up, quotes included, and delivers them with total confidence. And unless you already know the subject well, you might not even catch it.

    That’s a problem. A big one. Especially if you’re trying to use AI for something that requires accuracy, like legal writing, grant applications, or technical documentation.

    So… Do I Really Need to Learn Prompt Engineering?

    You’ve probably heard this before: “Prompting is the new coding. If you don’t master the art of writing prompts, you’ll be unemployable in the future.”

But here’s the thing: how many of us still code today? In the early days of computing, you needed to know command-line syntax just to use your PC. Now? You tap an app and it works.

The same will likely happen with prompting. AI tools will become more intuitive. Natural language input will get better. Templates, buttons, and integrations will replace the need for finely tuned prompt engineering in most use cases.

    In Summary

    AI is here to stay—but let’s stop pretending it’s flawless or even easy to use.
    It’s a tool. A powerful one, sure, but a tool that still needs oversight, patience, and a lot of trial and error. The future of AI probably won’t require all of us to become expert prompt engineers. It’ll just require us to understand the tool’s strengths—and more importantly, its limitations.

    Use AI, enjoy it, experiment with it. But don’t believe the hype that it’s already doing everything perfectly.

    Because sometimes it still can’t draw a mouth…but neither can I.

    Rebuttal

I figured it was only fair to let ChatGPT respond, so I pasted the article above with the prompt: “How do you feel about that article? Is it accurate?”

    I think your article is very accurate—and refreshingly honest.

    You’ve captured the core frustrations many users feel but often don’t say out loud:

    • Misunderstood prompts: Absolutely a real issue. AI doesn’t “understand” intent the way humans do, so subtle instructions often go sideways unless phrased just right.
    • Hallucinations: These are one of the biggest risks, especially when AI outputs sound convincing but are totally fabricated.
    • Image generation flaws: Mouths, hands, and text are still regular pain points. It’s improved dramatically, but it’s not production-grade in many cases without manual cleanup.
    • Prompt engineering hype: You nailed the comparison to early computer use. Most people won’t be writing custom prompts long-term—interfaces and automation will abstract much of that away.

    The tone is spot-on: skeptical, fair, and not anti-AI, just clear-eyed. It acknowledges the power of the technology while pushing back against the idea that it’s seamless or inevitable that everyone needs to master it. That nuance is what makes it stand out.

    Ok, maybe we can still be friends.

  • AI Cybersecurity Playbook: Enhancing Collaborative Defense


    The AI Cybersecurity Collaboration Playbook, developed by the Cybersecurity and Infrastructure Security Agency (CISA), serves as a crucial guide to improving collaboration and enhancing the cybersecurity resilience of AI systems and technologies. With AI playing an increasingly integral role in various sectors, the potential for cyber threats targeting AI systems also escalates. In response, CISA has developed this playbook to strengthen partnerships between federal, state, and local government agencies, the private sector, academia, and international entities. The playbook provides a detailed framework for how stakeholders can work together to manage AI cybersecurity risks and bolster collective defense.

    Objectives and Focus Areas

    The playbook’s central goal is to foster a collaborative approach to AI cybersecurity. As AI technologies become more embedded in critical infrastructure and everyday business processes, their vulnerabilities need to be addressed through cooperative efforts. The playbook underscores the importance of sharing information about AI-related threats, incidents, and vulnerabilities. This exchange of data allows for timely identification of emerging threats, better coordination in response efforts, and more informed decision-making when it comes to AI system security.

    One of the key principles outlined in the playbook is the necessity of voluntary, yet structured, information sharing. The playbook recommends that stakeholders share information regarding AI-related cybersecurity incidents, as well as the vulnerabilities that these incidents expose. This is important because AI systems often involve complex architectures and interdependencies, making them susceptible to novel and hard-to-detect cyberattacks. The playbook facilitates stakeholders’ efforts to share this information securely and responsibly, with an emphasis on protecting sensitive data and ensuring compliance with privacy laws.

    Collaborative Defense

    The AI Cybersecurity Collaboration Playbook also provides practical guidelines on how different parties can contribute to collective defense strategies. CISA encourages stakeholders to work together through the Joint Cyber Defense Collaborative (JCDC) to tackle AI-specific challenges. This collaboration involves government agencies, the private sector, and critical infrastructure providers working in concert to detect, respond to, and mitigate cyber threats that target AI systems.

    To maximize the effectiveness of collaboration, the playbook highlights the importance of proactive threat detection. By sharing threat intelligence and insights across sectors, stakeholders can identify vulnerabilities and attack patterns early on, reducing the potential damage that can be caused by these threats. Additionally, the playbook stresses the importance of coordinated response efforts. The JCDC serves as a central mechanism for organizing these efforts, ensuring that response activities are not duplicated and that resources are optimized for maximum impact.

Recognizing the sensitivities around sharing cybersecurity data, the playbook addresses legal protections for shared information. It emphasizes the role of the Cybersecurity Information Sharing Act of 2015 (CISA 2015, not to be confused with the agency of the same acronym) in creating a framework for secure information exchange. The playbook assures stakeholders that sharing information about cybersecurity threats is protected from liability, as long as the sharing follows the guidelines set forth in the Act. This is crucial because many organizations are hesitant to share data due to concerns about privacy, legal consequences, and competitive disadvantage. By clarifying the protections available under the Act, the playbook aims to reduce these barriers to information sharing.

    Resilience Through AI Security

    AI systems are increasingly critical to the functioning of modern society, from healthcare and transportation to financial services and energy. However, as these systems grow more complex, their resilience to cyber threats becomes more challenging to maintain. The playbook outlines how AI stakeholders can better prepare for the unique cybersecurity risks that AI systems face. It highlights the need for continuous monitoring of AI systems and the potential vulnerabilities that may emerge over time. This ongoing vigilance is key to building resilient AI technologies that can withstand cyberattacks and recover from disruptions.

    The playbook also emphasizes that AI cybersecurity is a shared responsibility. While government entities and cybersecurity organizations play a critical role in shaping policy and setting standards, private companies that develop and deploy AI technologies are on the front lines of defense. Therefore, all stakeholders must take ownership of their cybersecurity responsibilities and work together to create secure, trustworthy AI systems. By sharing expertise, pooling resources, and learning from each other’s experiences, stakeholders can improve the security posture of AI systems on a national and international scale.

    Conclusion

    The AI Cybersecurity Collaboration Playbook is an essential resource for strengthening the cybersecurity of AI technologies. It offers a comprehensive approach to tackling the growing challenges associated with AI cybersecurity by promoting collaboration, improving information sharing, and ensuring legal protections for stakeholders. As AI continues to play a pivotal role in society, the need for secure AI systems is more critical than ever. By following the strategies outlined in the playbook, stakeholders can contribute to a more secure, resilient AI ecosystem that is better equipped to handle the evolving cybersecurity landscape.

    For further details, you can access the full document here: AI Cybersecurity Collaboration Playbook and explore more about CISA’s work at CISA.

  • Will the courts (finally) step in on AI?


    The New York Times vs. OpenAI and Microsoft

In a groundbreaking legal confrontation, the New York Times has recently filed a copyright infringement lawsuit against OpenAI, the creators of ChatGPT, and Microsoft, a major investor in OpenAI. This lawsuit delves into the complex and arguably unprecedented issues of copyright law in the age of artificial intelligence. As technology rapidly evolves, so too does the landscape of legal challenges. In his latest post, my friend David Lizerbram explores the intricate details of this case, examining the implications of AI’s use of copyrighted materials without permission and the potential defenses available. Join him as he navigates the legal intricacies and the broader implications for copyright law in the digital age at New York Times v. OpenAI and Microsoft Copyright Case – David Lizerbram & Associates (lizerbramlaw.com)


    Paul Bergman runs a business strategy and cybersecurity consulting company in San Diego. He writes on cybersecurity and board management for both corporate and nonprofit boards.