Generative AI

Figure 1 – AI evolution (Synoptek, 2023)
In recent years, we have witnessed remarkable advances in the field of Artificial Intelligence (AI), driven by techniques such as Machine Learning (ML), Deep Learning (DL), and Generative Artificial Intelligence (GenAI).
ML is a subfield of AI that has revolutionized how information systems interact with and learn from data. By moving away from explicit programming in favor of automatically extracting patterns and relevant features from data, ML has enabled intelligent systems that analyze vast volumes of information and build statistical models for information categorization and classification. Practical applications of ML include weather forecasting, financial risk assessment in banking, text categorization, and a wide range of adaptable recommendation systems.
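The contrast with explicit programming can be illustrated with a toy text categorizer: instead of hand-writing keyword rules, the model estimates word statistics from labelled examples. A minimal Naive Bayes sketch in pure Python (the corpus, labels, and queries are invented for illustration):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Learn per-class word frequencies from (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log Naive Bayes score."""
    vocab = {w for c in word_counts.values() for w in c}
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        n_words = sum(word_counts[label].values())
        score = math.log(label_counts[label] / total)
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero out the score
            p = (word_counts[label][w] + 1) / (n_words + len(vocab))
            score += math.log(p)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy corpus: the "program" is extracted from the data itself.
corpus = [
    ("rain showers and strong wind expected", "weather"),
    ("sunny skies with mild temperatures", "weather"),
    ("credit risk exposure in the loan portfolio", "finance"),
    ("quarterly earnings and market risk report", "finance"),
]
model = train_nb(corpus)
print(classify("wind and rain tomorrow", *model))   # -> weather
print(classify("loan portfolio risk", *model))      # -> finance
```

The word statistics replace hand-written rules: retraining on a different corpus changes the behavior without touching the code.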
The emergence of DL addressed some limitations of the preceding discipline, particularly the ability to perform more complex tasks involving multidimensional information patterns. DL introduced deep neural networks with multiple layers, enabling more autonomous and effective learning from complex data structures and leading to unprecedented advances in areas such as computer vision, natural language processing, and speech recognition.
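The layered learning that DL introduces can be sketched with a minimal two-layer network trained on XOR, a pattern no single linear model can separate. This is an illustrative numpy sketch with invented hyperparameters, not a production architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units feeding one output unit.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # predictions after training; aims to recover XOR
```

The hidden layer learns intermediate features (roughly, OR-like and AND-like detectors) that make the final layer's job linear, which is the essence of depth.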
Generative Artificial Intelligence is the most recent and advanced discipline within AI, aimed at creating models capable of autonomously generating new, realistic content. Through techniques such as Generative Adversarial Networks (GANs), GenAI introduces a new level of disruptive change across many sectors of society. Current applications include generating highly realistic images, video, and music that are almost indistinguishable from human-made content. The power of GenAI is a differentiating factor for organizations: it has launched society into a new revolution, and whether we choose to take part or not, it will push organizations to adapt and adopt these clearly differentiating solutions in pursuit of their goals.
New Threatscape
With the increased availability of GenAI on the market and its rapid expansion and demand, a new horizon of threats and risks is emerging. According to the latest forecast report on major threats for 2030 (ENISA, 2023), AI is a focal point, with interdependencies on other threats such as supply chain compromises and software dependencies, increased digital surveillance and loss of privacy, disinformation campaigns, and the abusive use of artificial intelligence.
Frameworks and Legal Norms
With regard to AI, Europe has been working to improve security and regulatory compliance in the use and development of this technology. Key milestones that led the European Union (EU) to the recent European AI Act (Act, 2024) include:
- General Data Protection Regulation (GDPR) (EU, 2018);
- Ethical Guidelines for Trustworthy AI (European Commission, 2019);
- European AI Act.
A similar effort is being observed worldwide:
- Singapore’s AI Governance Model (PDPC, 2020);
- Canada’s Artificial Intelligence and Data Act (Canada, 2022);
- The U.S. Blueprint for an AI Bill of Rights (U.S., 2022);
- U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (U.S., 2023);
- United Kingdom’s Artificial Intelligence Regulation Bill (U.K., 2023).

Figure 2 – Global trend toward the creation of AI legislation and frameworks.
EU AI Act
The regulation proposed in April 2021 includes a set of rules for users, developers, and providers of AI-based services and technologies. It details the obligations that each entity must observe in the use and provision of this type of technology within the EU.
This European Act defines a risk-based approach, where the obligations for an AI system are proportional to the level of risk it presents, based on its design, architecture, and intended use:
- Low Risk: includes systems that have minimal or no impact on individual rights, safety, or interests. These systems are subject to few transparency obligations. Examples include digital games and spam filters.
- Limited Risk: includes systems that present a potential risk to individual rights, safety, or interests. These systems are also subject to transparency obligations and may require a conformity assessment before being marketed. Examples include chatbots.
- High Risk: includes systems that significantly impact individual rights, safety, or interests. These systems are used in critical infrastructure, transportation, healthcare, law enforcement, and customs control. At this level, AI solutions are subject to strict transparency and conformity assessment criteria, along with specific requirements related to data quality, fundamental rights, human oversight, and cybersecurity. Examples include applications in transportation, education, human resources, the financial sector, border control, fairness and compliance systems, and biometric systems.
- Unacceptable Risk: systems prohibited by law, such as those used for “social scoring” or for manipulating individuals without their knowledge or prior consent. Social scoring by public authorities and real-time remote biometric surveillance in public spaces by law enforcement are forbidden (except in certain narrowly defined cases).
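The four-tier structure lends itself to a simple lookup when triaging an AI use case against the Act. The mapping below paraphrases the examples given above and is purely illustrative, not legal advice:

```python
from enum import Enum

class Risk(Enum):
    LOW = "minimal transparency obligations"
    LIMITED = "transparency obligations; possible conformity assessment"
    HIGH = ("strict transparency and conformity assessment; data quality, "
            "human oversight, and cybersecurity requirements")
    UNACCEPTABLE = "prohibited"

# Illustrative mapping of example use cases to tiers (from the text above).
USE_CASE_TIER = {
    "spam filter": Risk.LOW,
    "video game": Risk.LOW,
    "chatbot": Risk.LIMITED,
    "border control biometrics": Risk.HIGH,
    "hr screening": Risk.HIGH,
    "social scoring": Risk.UNACCEPTABLE,
}

def obligations(use_case: str) -> str:
    """Return the obligations for a known use case, else flag for review."""
    tier = USE_CASE_TIER.get(use_case.lower())
    return tier.value if tier else "unknown: requires case-by-case assessment"

print(obligations("chatbot"))
print(obligations("social scoring"))
```

In practice the tier depends on design, architecture, and intended use, so any real triage tool would need far richer inputs than a name lookup.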
The impact of the European AI Act within the EU is evident. For compliance and applicability purposes, the Act officially entered into force in August 2024, with full implementation phased in over the following 36 months:
- After 6 months: the prohibitions on unacceptable-risk practices take effect.
- After 12 months: governance rules and obligations for general-purpose AI solutions apply.
- After 36 months: full application of all regulations for AI-based systems.
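Assuming the commonly cited entry-into-force date of 1 August 2024 (the text above gives only the month), the milestones can be computed with a small stdlib helper:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (stdlib only, no dateutil)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Assumption: exact entry-into-force day taken as 1 August 2024.
ENTRY_INTO_FORCE = date(2024, 8, 1)

milestones = {
    "prohibited practices apply": add_months(ENTRY_INTO_FORCE, 6),
    "general-purpose AI obligations": add_months(ENTRY_INTO_FORCE, 12),
    "full application": add_months(ENTRY_INTO_FORCE, 36),
}
for name, when in milestones.items():
    print(f"{name}: {when.isoformat()}")  # e.g. full application: 2027-08-01
```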
To assist organizations in preparing for the transition and implementation of the European AI Act, the European Commission has developed two initiatives:
- AI Pact (European AI Pact, 2024);
- European Act Conformity Test (Act E. A., 2024).
New Threatscape for 2025
Given the complexity of creating, training, and deploying the latest generative AI models, together with the extended supply chain required to develop and train them, this is likely to be the most vulnerable technology within production chains.
Additionally, over the past two years we have seen a small group of companies emerge as market leaders in high-performance, scalable generative AI solutions ready for rapid integration into market demands. For instance, OpenAI, through its recent exclusive partnership with Microsoft, currently leads the market with a substantial advantage over competitors such as Meta, Google, Apple, xAI, and other entities in China and Europe.
The expertise required to develop solutions with capabilities and performance comparable to OpenAI’s is not openly available. Consequently, fully developing their own generative AI solutions independently is often beyond the reach of companies, organizations, and clients. This reality has led to a clear and evident race for the technology, as the successful integration of AI and generative models within an organization is a distinguishing factor for competitive advantage.

Figure 3 – OpenAI evolution.
Thus, the conditions are aligning for a new horizon of risks and threats associated with the use of AI in the application market. The supply chain for building and sustaining an AI-based model is extensive and dispersed. A further differentiating factor in the application security cycle is the integration of Machine Learning Operations (MLOps): in addition to the traditional “code” that makes up the AI solution, this cycle includes the “information and data” used to train the AI model.
Given the current market, the involvement of multiple supply sources and decentralized contributions in building an AI solution is inevitable. Combined with the knowledge gap involved in constructing these solutions, we are likely to see users, clients, and distributors of these solutions rely on “trust” alone to certify application security, leading to the “blind” acceptance of new risk and threat scenarios. This is unacceptable.
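One baseline alternative to trusting artifacts blindly is verifying every model and dataset against a manifest of cryptographic hashes before use. A minimal sketch (the file names and manifest layout are invented for illustration; real deployments would also sign the manifest):

```python
import hashlib
import json
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 (avoids loading large model files)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest_path: Path) -> list:
    """Return the artifacts whose on-disk hash does not match the manifest."""
    manifest = json.loads(manifest_path.read_text())
    base = manifest_path.parent
    return [
        name for name, expected in manifest["artifacts"].items()
        if sha256_of(base / name) != expected
    ]

# Demo with temporary files standing in for model weights and training data.
with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp)
    (base / "weights.bin").write_bytes(b"model weights")
    (base / "train.csv").write_bytes(b"training data")
    manifest = {"artifacts": {
        "weights.bin": sha256_of(base / "weights.bin"),
        "train.csv": sha256_of(base / "train.csv"),
    }}
    (base / "manifest.json").write_text(json.dumps(manifest))
    # Simulate a tampered training set somewhere in the supply chain.
    (base / "train.csv").write_bytes(b"poisoned data")
    tampered = verify_manifest(base / "manifest.json")
    print(tampered)  # -> ['train.csv']
```

Hash checks cover integrity only; they say nothing about whether the original training data was trustworthy, which is why the threat modeling below still matters.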
Risks and Threats
We consider the following as the most significant risks and threats:
- From the User’s Perspective:
- Creation of Harmful Content
- Deepfakes
- Data Privacy and Sensitive Information Leaks
- Copyright Infringement
- Accuracy and Bias Issues
- Ethical and Social Concerns
- From the Application Security Perspective:
- Attacks via Adversarial Machine Learning (AML)
- Generative AI, Large Language Models (LLMs)
- Supply Chain Attacks
- New Threat Actors and Attack Vectors
- Transparency, Open Source, and Accountability
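Adversarial Machine Learning, the first application-security item above, can be illustrated with the classic fast gradient sign method (FGSM) against a toy logistic-regression model. The weights, input, and step size below are invented for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" model (weights are invented for illustration).
w = np.array([2.0, -1.5, 0.5])
b = 0.1

x = np.array([0.9, -0.8, 0.3])   # a sample the model classifies as positive
y = 1.0                           # its true label

p_clean = sigmoid(w @ x + b)      # ~0.96: confidently correct

# FGSM: perturb the INPUT along the sign of the loss gradient w.r.t. x.
# For logistic loss, d(loss)/dx = (p - y) * w.
grad_x = (p_clean - y) * w
eps = 1.0                         # deliberately large step for this toy model
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)    # now below 0.5: misclassified
print(f"clean score: {p_clean:.3f}, adversarial score: {p_adv:.3f}")
```

Against image or language models the same idea works with imperceptibly small perturbations, which is what makes AML a distinct threat class rather than ordinary input validation.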
Intelligent Adaptation of Threat Modeling and Offensive Security
Given the new threat landscape for generative AI-based applications, adapting security testing, risk management, and threat modeling exercises is crucial. We believe that part of the success in effectively mitigating and managing security within the AI development lifecycle will involve integrating threat modeling and offensive security testing, such as Pentesting and Red Team exercises, into a continuously managed, interconnected environment that is appropriately adapted to the development, training, deployment, and compliance of AI applications.
Our experience with architectures and systems integrating AI has enabled us to shift our focus to key aspects that we consider highly relevant for threat modeling in AI solutions, particularly generative models such as LLMs:
- Re-evaluate trust boundaries between systems from a new perspective;
- Identify the model/system actions and how they can impact infrastructure, considering both trust and user profiles;
- Identify all AI dependencies and require transparency throughout the entire supply chain;
- Identify all frontend layers in the supply chain, both in terms of data used and AI models;
- Address the latest AI-specific threats.
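The checklist above can be captured as a lightweight, machine-readable record so that threat-modeling findings feed the continuous cycle described earlier. The field names and example entries are an illustrative sketch, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIThreatModel:
    """Illustrative record mirroring the checklist items above."""
    system: str
    trust_boundaries: list = field(default_factory=list)  # re-evaluated boundaries
    model_actions: list = field(default_factory=list)     # actions and their impact
    dependencies: list = field(default_factory=list)      # full AI supply chain
    frontend_layers: list = field(default_factory=list)   # data and model layers
    ai_threats: list = field(default_factory=list)        # AI-specific threats

    def gaps(self):
        """Checklist items still unaddressed (i.e. empty lists)."""
        return [name for name, items in vars(self).items()
                if isinstance(items, list) and not items]

tm = AIThreatModel(system="customer-support LLM")
tm.trust_boundaries.append("RAG store <-> prompt assembly")
tm.ai_threats.append("prompt injection via retrieved documents")
print(tm.gaps())  # remaining checklist items still to cover
```

Keeping the model as data rather than a document makes it easy to gate Pentesting and Red Team scope, and CI checks, on unresolved gaps.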

Figure 4 – IntelligentAST: adapting to the new threatscape by seamlessly and continuously integrating proper threat modeling and offensive security into the SDLC.
References
Act, E. A. (2024). EU Artificial Intelligence Act: High-level summary. Retrieved from https://artificialintelligenceact.eu/high-level-summary/
Canada (2022). Artificial Intelligence and Data Act. Retrieved from https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act
ENISA (2023, March). ENISA Foresight Cybersecurity Threats for 2030.
U.S. (2022). Blueprint for an AI Bill of Rights. Retrieved from https://www.whitehouse.gov/ostp/ai-bill-of-rights/
U.S. (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Retrieved from https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
European Commission (2019). Ethics Guidelines for Trustworthy AI. Retrieved from https://digital-strategy.ec.europa.eu/pt/library/ethics-guidelines-trustworthy-ai
PDPC (2020). Model AI Governance Framework, 2nd edition. Retrieved from https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf
U.K. (2023). Artificial Intelligence (Regulation) Bill [HL]. Retrieved from https://bills.parliament.uk/bills/3519
Synoptek (2023). AI, ML, DL, and Generative AI Face Off: A Comparative Analysis. Retrieved from https://c.com/insights/it-blogs/data-insights/ai-ml-dl-and-generative-ai-face-off-a-comparative-analysis/
EU (2018). General Data Protection Regulation. Retrieved from https://eur-lex.europa.eu/PT/legal-content/summary/general-data-protection-regulation-gdpr.html