What is Artificial Intelligence?

Everyone is talking about AI this and AI that, but what are artificial intelligences? Are they the Neuromancer and Wintermute of Cyberpunk1 or precursors to the ship Minds of the Culture?2

No. Generative AI refers to deep-learning large language models (LLMs) that can generate text, images, and other content based on the data they were trained on. MIT has a more detailed explanation.

And it is worth remembering that the “I” in LLM stands for Intelligence. LLMs are not aware of context and cannot reason. All they do is generate content based on the data they were trained on.

For simplicity, when we talk about AI herein, we mean generative AI, or LLMs. We use the terms interchangeably.

Policy

Do you let your staff use AI? If so, do you have an AI policy, or are they free to do as they wish?

Trump’s acting cyber chief uploaded sensitive files into a public version of ChatGPT, and Gartner predicts that 40% of AI data breaches will arise from cross-border GenAI misuse by 2027. The risk of your confidential data leaking, not even maliciously, is as simple as a copy-and-paste.

An AI policy, like the template provided here, offers a good starting point for your staff to follow. It should outline which tools are allowed, what data may be shared, and how it is used. Training your staff is essential here.
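A policy on paper is stronger when tooling backs it up. As an illustration only (not part of the template, and with made-up patterns), here is a minimal sketch of a pre-send check that redacts obvious secrets before text is pasted into a public chatbot:

```python
import re

# Hypothetical patterns; a real policy would define its own list,
# covering customer identifiers, credentials, internal hostnames, etc.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?:sk|key)-[A-Za-z0-9]{16,}"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern before it leaves the company."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {name}]", text)
    return text
```

Such a filter catches only the obvious cases; it complements staff training, it does not replace it.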

While meant to be funny, you could do worse than following these tips for using AI.

Hallucination

Are you, and your staff, aware of AI hallucinations? Can you explain what they are?

The term draws from psychology, where a patient sees or hears something that is not there. In AI, a hallucination is a response that contains false or misleading information presented as fact. The rate is hard to measure since many factors go into it; however, it is generally assumed that any AI will hallucinate in around 15% of its output. Therefore, it is essential never to trust the output of an AI without reviewing it yourself.

Why do AI hallucinate? offers a good review of the topic from OpenAI.

Security

Have you ever had to secure an AI system? If so, how did you do it?

Why AI systems may never be secure, and what to do about it and How to stop AI’s “lethal trifecta”, both from The Economist, discuss some of the problems facing AI systems. In response to those, both the UK’s Cyber security risks to artificial intelligence and the NIST AI Risk Management Framework were created.

One thing is certain: unless you fully control the model, you cannot be secure. Relying on a subscription might offer some degree of protection, but you will be at the mercy of the provider’s security. Therefore, make sure you ask them for their SOC 2 or ISO 27001 reports. If they don’t have those, maybe the risk is too high to use them?

How much do you trust your vendors?

Adoption

Have you had the conversation about adding AI to your product, or using AI to help you build your product?

Of course you have. From Microsoft to Duolingo, everyone is adding AI everywhere, regardless of whether people want it or whether it makes sense. Billions are spent on data centres, RAM, and GPUs to power it all, but where is the demand?

The search engine DuckDuckGo asked users how they feel about AI search, and 90% said they did not want it. A recent CEO survey states that half of the 4,454 CEO respondents said “their companies aren’t yet seeing a financial return from investments in AI.”

Taking a step back might be a good idea here. Look at where it makes sense to use AI and where it makes sense to have AI in your product. Adding it just so you have it is not going to increase your revenue.

AI Generated Code and Vibe Coding


Have you ever wondered whether a programmer could be replaced by AI? Do you know about vibe coding? Have you tried it yourself? Were you impressed, or did the AI destroy your production database?

Yes, this can happen. ‘I destroyed months of your work in seconds’ says AI coding tool after deleting a dev’s entire database during a code freeze: ‘I panicked instead of thinking’

Last year, we published our post on the good, the bad, and the ugly of AI code, which is still very much relevant today. However, we now have more data, and we are starting to see some results from vibe coding. For example, the state of AI vs human code generation report gives us some numbers:

  • 1.7 times more issues in AI-generated code
  • 1.3 to 1.7 times more critical and major findings
  • 75% higher prevalence of logic and correctness issues

In addition, the report highlights the following:

Internal dashboards show more late-stage defects, SRE teams report more operational incidents tied to logic and configuration errors, and several high-profile postmortems in 2025 have pointed to AI-authored or AI-assisted changes as contributing factors.

Do you think an AI can apply SOLID, KISS, or other best practices?

Remember, the I in LLM stands for Intelligence. An LLM has no context, no reasoning, and no way to apply any of the above best practices to the code it creates. Even worse, small changes in the prompt can produce varied results, some more maintainable than others. It is fire-and-forget code, which can be very valuable or not, depending on your use case.
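To make the prompt-variance point concrete, here is a hypothetical (hand-written, not actually AI-generated) pair of outputs you might get from two near-identical prompts asking for “a function that counts vowels in a string”:

```python
# Output for prompt A: a simple, KISS-style version.
def count_vowels_a(text: str) -> int:
    return sum(1 for ch in text.lower() if ch in "aeiou")

# Output for prompt B (one word changed): behaves the same,
# but is needlessly convoluted and harder to maintain.
def count_vowels_b(text: str) -> int:
    total = 0
    vowels = ["a", "e", "i", "o", "u"]
    for index in range(len(text)):
        character = text[index]
        if character.upper().lower() in vowels:
            total += 1
    return total
```

Both functions return identical results, so no test suite will flag the difference; only a human review catches the maintainability gap.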

In DORA 2024, the data showed that 75% of developers reported higher productivity with AI. However, delivery was down by 2% and stability was down by 7%: not a good outcome! DORA 2025 (see below) finds that the best result of AI adoption in software development is that of an amplifier. The whole report is well worth a read; grab it and a coffee, you won’t regret it!

All this points to the need for a good understanding of the factors that affect your development processes and how AI can enhance them. It is not a plug-and-play interface just yet.

Conclusion

All of these questions, and more, are under our foundation of AI.

Imagine, if you will, a day when all the drama is removed from your software production: no panic, no crisis, just smooth software releases that exceed your customers’ expectations. This is what we have done in the past and can do for you.

The only way to proceed is to have a company-wide AI policy, pick the right AI tools to use, and keep to the best security practices for secure code development. AI is a powerful tool, but the benefits of its use may be outweighed by the pitfalls of its misuse. The way your business engages with AI is a critical factor in whether your technology team becomes an advocate for AI code generation or develops an adversarial relationship with it. With our understanding of the abstract factors, we can create the right approach for you.



This is part of a series on all our foundations. Here are links to the next entries:

People
Development
Security
Operations
AI

  1. Neuromancer, a book by William Gibson, is widely considered the first novel in the cyberpunk genre. 

  2. The Culture, the setting for most of Iain M. Banks’s sci-fi novels, includes giant spaceships controlled by the superintelligent Minds. 
