
What Is Symbolic AI? Complete 2026 Guide


For decades, researchers dreamed of a machine that could think. Not guess, not pattern-match — but actually reason. Derive conclusions from facts. Apply rules. Explain its decisions in plain language. That dream produced one of the most important branches of artificial intelligence ever built: Symbolic AI. Today, when most headlines obsess over neural networks and large language models, Symbolic AI quietly powers legal tech platforms, medical decision tools, tax compliance systems, and knowledge graphs used by billions of people. It never went away. And in 2026, as AI reliability becomes a boardroom-level concern, it may be more relevant than ever.


The 12-Point AI Ethics & Data Privacy Checklist for Small SaaS
$29.00$12.00
See What’s Inside

TL;DR

  • Symbolic AI represents knowledge through explicit symbols, rules, and logic — not through patterns in data.

  • It was the dominant approach to AI from the 1950s through the 1980s, producing expert systems and formal reasoning engines.

  • Its key strengths are interpretability, logical consistency, and low data requirements.

  • Its key weaknesses are brittleness, difficulty handling ambiguity, and the burden of hand-coding knowledge.

  • Symbolic AI never died; it powers rules engines, knowledge graphs, compliance tools, and planning systems today.

  • Neuro-symbolic AI — combining neural networks with symbolic reasoning — is an active research frontier in 2026.


What is symbolic AI?

Symbolic AI is an approach to artificial intelligence that represents knowledge using explicit symbols, rules, and logical relationships. A symbolic AI system reasons by applying rules to known facts to derive new conclusions. It is interpretable, logic-driven, and does not require large datasets to function — unlike machine learning.







1. Simple Definition

Symbolic AI is an approach to building intelligent systems that represents knowledge as explicit symbols, rules, and logical relationships. A symbolic AI system reasons by manipulating those symbols according to defined rules — just as a mathematician manipulates equations, or a lawyer applies statutes to facts.


Unlike modern machine learning, a symbolic system does not learn from data. It starts with human-encoded knowledge — facts, categories, relationships, and rules — and uses logical inference to derive answers.


The term "symbolic" distinguishes this approach from "sub-symbolic" approaches like neural networks, which encode knowledge implicitly in billions of numerical weights rather than in readable rules.


Symbolic AI is also called classical AI, rule-based AI, logic-based AI, and — with a hint of retrospect — Good Old-Fashioned AI (GOFAI), a phrase coined by philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea.



2. What "Symbolic" Means

A symbol in this context is a discrete, meaningful token that represents something in the world. Words are symbols. Numbers are symbols. The concept "mammal" is a symbol. So is "Paris," "is-capital-of," and "greater-than."


Symbols are meaningful by design. The symbol MAMMAL in a knowledge base refers to a category of warm-blooded vertebrates with hair. The system knows this because a human encoded that definition. The symbol is not an arbitrary floating-point number — it is a named, interpretable token.


The power of symbolic reasoning comes from combining symbols according to logical rules. Consider the classic syllogism:

Fact 1:   Socrates is a human.
Fact 2:   All humans are mortal.
Conclusion: Socrates is mortal.

A symbolic AI system can derive the conclusion from the two facts automatically, using standard logical inference. No training data required. No gradient descent. Just rules applied to facts.
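The derivation can be sketched in a few lines of Python. This is a minimal forward-chaining loop, not a real inference engine; the tuple encoding and names are illustrative assumptions:

```python
# Facts and one rule, encoded as (relation, subject, object) tuples.
facts = {("is_a", "socrates", "human")}
rules = [
    # IF ?x is_a human THEN ?x is_a mortal
    (("is_a", "?x", "human"), ("is_a", "?x", "mortal")),
]

# Forward chaining: apply every rule to every fact until nothing new appears.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        for relation, subject, obj in list(facts):
            if relation == condition[0] and obj == condition[2]:
                new_fact = (conclusion[0], subject, conclusion[2])
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

print(("is_a", "socrates", "mortal") in facts)  # True
```

The loop terminates as soon as a full pass adds no new facts, which is exactly how a simple production system reaches a fixed point.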


Symbols represent entities (Socrates, Paris), categories (human, city), properties (mortal, beautiful), and relationships (is-a, located-in, caused-by). Together, these form a structured representation of knowledge about the world.


This is fundamentally different from how a neural network operates. A neural network processing the word "Socrates" sees a high-dimensional numerical embedding — a point in vector space shaped by patterns across billions of text documents. There is no explicit Socrates is mortal rule anywhere. The network may produce correct answers, but the reasoning path is not visible or auditable.



3. How Symbolic AI Works


A symbolic AI system has several core components:


Knowledge Base

The knowledge base stores facts and rules about a domain. Facts describe what is true: "Aspirin is an anti-inflammatory." Rules describe what follows from what: "If a patient has a fever AND headache, consider influenza."


Inference Engine

The inference engine applies rules to facts to derive new conclusions. It systematically searches through the knowledge base, chaining rules together until it reaches an answer or exhausts its options.


Ontology

An ontology defines the categories, properties, and relationships in a domain. It answers structural questions: What kinds of things exist? How are they related? What properties do they have? Ontologies are the scaffolding on which facts hang.


Search and Planning

Many symbolic AI problems require searching through a space of possible states or actions. A planning system might represent a goal (deliver a package to Room 302) and a set of possible actions (walk, pick up, put down), then search for a valid sequence of actions that achieves the goal.


Explanation Module

Because every conclusion follows from explicit rules, a symbolic system can trace its reasoning back to the original facts and rules it used. This is a built-in explanation capability — something neural networks lack natively.



4. A Concrete Example


Consider a simplified loan approval system built with symbolic AI.


Facts loaded into the system:

applicant(john_smith).
credit_score(john_smith, 720).
annual_income(john_smith, 65000).
debt_to_income_ratio(john_smith, 0.28).
employment_status(john_smith, employed).
loan_amount_requested(john_smith, 200000).

Rules in the knowledge base:

Rule 1: IF credit_score >= 680 AND employment_status = employed
        THEN eligible_for_standard_loan = true

Rule 2: IF eligible_for_standard_loan = true
           AND debt_to_income_ratio <= 0.43
           AND (annual_income * 4) >= loan_amount_requested
        THEN approve_loan = true

Rule 3: IF approve_loan = true
        THEN assign_interest_rate = 6.5%

Inference steps:

  1. John's credit score is 720 (≥ 680) and he is employed → Rule 1 fires → eligible_for_standard_loan = true

  2. DTI is 0.28 (≤ 0.43); income × 4 = $260,000 ≥ $200,000 requested → Rule 2 fires → approve_loan = true

  3. Rule 3 fires → interest rate assigned at 6.5%


The system can explain its decision step by step: "Loan approved because credit score exceeded threshold, employment status confirmed, debt-to-income ratio is within limits, and income supports the loan amount. Interest rate set per standard tier policy."


This is auditable, repeatable, and understandable. A compliance officer can review every rule. A regulator can trace every decision.
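The three rules above can be sketched directly in Python. The function and field names below are illustrative, not from any production lending system, and the explanation trace is simplified:

```python
def evaluate_loan(applicant):
    """Apply Rules 1-3 in order, recording which rules fired and why."""
    derived, trace = {}, []

    # Rule 1: standard-loan eligibility
    if applicant["credit_score"] >= 680 and applicant["employment_status"] == "employed":
        derived["eligible_for_standard_loan"] = True
        trace.append("Rule 1 fired: credit_score >= 680 and employment confirmed")

    # Rule 2: approval
    if (derived.get("eligible_for_standard_loan")
            and applicant["debt_to_income_ratio"] <= 0.43
            and applicant["annual_income"] * 4 >= applicant["loan_amount_requested"]):
        derived["approve_loan"] = True
        trace.append("Rule 2 fired: DTI within limit and income supports amount")

    # Rule 3: rate assignment
    if derived.get("approve_loan"):
        derived["interest_rate"] = 6.5
        trace.append("Rule 3 fired: standard tier rate assigned")

    return derived, trace

john = {
    "credit_score": 720,
    "annual_income": 65000,
    "debt_to_income_ratio": 0.28,
    "employment_status": "employed",
    "loan_amount_requested": 200000,
}
derived, trace = evaluate_loan(john)
print(derived["approve_loan"], derived["interest_rate"])  # True 6.5
```

The `trace` list is the explanation module in miniature: every conclusion carries a record of the rule that produced it.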



5. History of Symbolic AI


The Birth of a Field (1956)

Artificial intelligence as a formal discipline began at the Dartmouth Summer Research Project on Artificial Intelligence in 1956. Organized by John McCarthy (who coined the term "artificial intelligence"), Marvin Minsky, Claude Shannon, and Nathaniel Rochester, the conference gathered around a bold hypothesis: every feature of human intelligence could, in principle, be precisely described and simulated by a machine.


The early researchers believed intelligence was fundamentally about reasoning — applying logic to symbols. This assumption was not arbitrary. Formal logic had already proven itself in mathematics and philosophy. If a machine could manipulate logical symbols the way mathematicians manipulate equations, surely it could think.


Logic and Early Programs (1956–1965)

Allen Newell and Herbert Simon built the Logic Theorist in 1956 — widely considered the first AI program. It proved 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica. In 1957, Newell and Simon followed with the General Problem Solver, a program designed to simulate general human problem-solving through means-ends analysis.


John McCarthy developed Lisp in 1958, which became the dominant language for AI research for decades. McCarthy also developed situation calculus, a formal logic system for reasoning about actions and change.


In 1965, DENDRAL was developed at Stanford by Edward Feigenbaum, Joshua Lederberg, and Bruce Buchanan. It was the first expert system — a program that used domain-specific rules to identify chemical compounds from mass spectrometry data. DENDRAL demonstrated that narrow, deep expertise could be encoded in rules and made computationally useful.


The Rise of Expert Systems (1970s–1980s)

The 1970s and 1980s were the golden age of expert systems — programs that captured the knowledge of human specialists and applied it automatically.


MYCIN (1972–1974, Stanford) diagnosed bacterial infections and recommended antibiotics with an accuracy that rivaled infectious disease specialists in controlled trials. It used around 600 rules of the form "IF symptom A AND test result B THEN suspect pathogen C with confidence X."


XCON (R1), developed at Carnegie Mellon University in 1980 for Digital Equipment Corporation, configured VAX computer systems. By the mid-1980s, it was processing tens of thousands of orders per year and saving DEC an estimated $25 million annually (McDermott, 1982, Artificial Intelligence).


Industry investment surged. Corporations built expert system shells. AI became a commercial market. The hype was real — and so were some of the results.


The AI Winters (1974–1980, 1987–1993)


Progress was not linear. Symbolic AI hit hard limits.


The first AI winter began around 1974, triggered partly by the Lighthill Report (1973), in which British mathematician James Lighthill concluded that AI had failed to live up to its promises — particularly in machine translation and general problem-solving. UK government funding collapsed.


The second AI winter arrived in the late 1980s and early 1990s. Expert systems proved brittle and expensive to maintain. Rules needed constant updating by rare, expensive knowledge engineers. The systems could not learn. They could not handle unexpected inputs. They failed outside their narrow domains.


In 1987, the market for Lisp machines — specialized hardware for symbolic AI — collapsed virtually overnight as cheaper general-purpose computers rendered them uncompetitive. Investment dried up. The field contracted sharply.


The Statistical Turn (1990s–2010s)

Through the 1990s and 2000s, statistical and machine learning methods steadily outperformed symbolic approaches on benchmarks in speech recognition, computer vision, and natural language processing. The rise of large datasets and computational power made data-driven learning feasible at scale.


When deep learning achieved breakthrough results — notably AlexNet winning the ImageNet competition in 2012 with a top-5 error rate of 15.3% vs. the next best 26.2% — the narrative in AI research shifted decisively toward neural approaches.


But "symbolic AI is dead" was never accurate. It retreated from research headlines, not from production systems.



6. Key Concepts in Symbolic AI

  • Symbol: A discrete, named token representing an entity, category, property, or relationship

  • Fact: A statement asserted as true in the knowledge base

  • Rule: A conditional statement: IF condition(s) THEN conclusion

  • Predicate: A function that returns true or false: is_mammal(dog) = true

  • Inference: Deriving new facts from existing facts and rules

  • Deduction: Drawing certain conclusions from premises

  • Induction: Inferring general rules from specific observations

  • Abduction: Inferring the most likely explanation for an observation

  • Knowledge base: The repository of facts and rules

  • Ontology: A formal model of categories, properties, and relationships in a domain

  • Semantic network: A graph where nodes are concepts and edges are relationships

  • Production system: A rule-based architecture of IF-THEN productions

  • Constraint satisfaction: Finding values that satisfy a set of constraints

  • Automated theorem proving: Using logic to prove mathematical or logical statements

  • Planning: Finding a sequence of actions to achieve a goal

Forward vs. Backward Chaining

Forward chaining starts from known facts and applies rules to derive new conclusions until the goal is reached. It works outward from data. "I have symptoms A, B, and C → what disease does this suggest?"


Backward chaining starts from a goal and works backward to find the facts that would support it. "Is this disease possible? → What symptoms would confirm it? → Does the patient have those symptoms?"


Most expert systems use one or both.
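Backward chaining fits naturally into a recursive function. A minimal sketch, with made-up medical goal names for illustration:

```python
# Rules map a goal to the subgoals that must all hold for it to be proved.
rules = {
    "influenza_possible": ["fever", "headache"],
    "fever": ["temp_above_38"],
}
# Facts observed for the current case.
observed = {"temp_above_38", "headache"}

def prove(goal):
    """Backward chaining: a goal holds if observed directly,
    or if some rule's subgoals can all be proved in turn."""
    if goal in observed:
        return True
    subgoals = rules.get(goal)
    if subgoals is None:
        return False
    return all(prove(subgoal) for subgoal in subgoals)

print(prove("influenza_possible"))  # True
```

Starting from the hypothesis and recursing down to observations is exactly the "Is this disease possible? → What would confirm it?" pattern described above.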



7. Symbolic AI vs Machine Learning

  • Knowledge source — Symbolic AI: human-encoded rules. Machine learning: patterns learned from data.

  • Interpretability — Symbolic AI: high; rules are readable. Machine learning: low to medium; models are often opaque.

  • Data requirements — Symbolic AI: low; rules replace data. Machine learning: high; requires large labeled datasets.

  • Flexibility — Symbolic AI: low; rules must be manually updated. Machine learning: high; adapts as data changes.

  • Reasoning ability — Symbolic AI: strong; can chain rules and apply logic. Machine learning: weak; statistical associations, not logic.

  • Generalization — Symbolic AI: brittle beyond defined rules. Machine learning: good interpolation within the training distribution.

  • Handling ambiguity — Symbolic AI: poor. Machine learning: better, but imperfect.

  • Maintenance — Symbolic AI: high; knowledge engineers needed. Machine learning: medium; retraining required.

  • Explainability — Symbolic AI: native. Machine learning: requires additional techniques (XAI).

  • Performance on perception tasks — Symbolic AI: poor. Machine learning: excellent.

  • Performance in rule-heavy domains — Symbolic AI: excellent. Machine learning: inconsistent.

The key distinction: symbolic AI encodes what to think; machine learning discovers what to think from examples. Both have domains where they excel. Neither is universally superior.



8. Symbolic AI vs Neural Networks

Neural networks represent knowledge implicitly — distributed across millions or billions of numerical weights shaped by training. No single weight says "dogs are mammals." That fact emerges statistically from the network's exposure to text and images containing both concepts.


Symbolic systems represent the same fact explicitly: is_a(dog, mammal) — readable, editable, auditable.


The practical implications are significant:


Neural networks excel at:

  • Image recognition

  • Speech recognition

  • Language generation

  • Pattern recognition in noisy, high-dimensional data

  • Tasks requiring generalization from large datasets


Symbolic systems excel at:

  • Logical deduction from defined rules

  • Tasks where every step needs to be explainable

  • Domains with strict compliance requirements

  • Environments where data is scarce but expertise is available

  • Problems requiring formal verification


The contrast is not a competition — it is a specialization. A self-driving car uses neural networks for perception (recognizing pedestrians, road markings, other vehicles) and can benefit from symbolic planning for route navigation and traffic rule compliance. These are complementary, not competing.



9. Strengths of Symbolic AI

Interpretability. Every decision traces back to explicit rules. "Loan denied because debt-to-income ratio exceeded 43% threshold per Rule 14, Section 3." No black-box mystery.


Low data requirements. Expert knowledge can be encoded directly. You do not need ten million labeled examples to build a tax compliance checker — you need accurate rules.


Logical consistency. A symbolic system that follows valid rules will not contradict itself (assuming the rule base is consistent). It will give the same answer to the same input every time.


Regulatory suitability. Financial services, healthcare, and legal systems often require explainable decisions. Symbolic AI delivers this natively.


Controlled knowledge updates. Adding a new rule or modifying an existing one is targeted and understandable. You change one rule; you know exactly what changes.


Formal verification. Symbolic systems can be mathematically proven to satisfy specifications — critical in aerospace, nuclear systems, and safety-critical software.



10. Limitations of Symbolic AI

The knowledge acquisition bottleneck. Getting knowledge from human experts into a rule base is slow, expensive, and error-prone. Experts often struggle to articulate their own reasoning. This was one of the central failures of 1980s expert systems.


Brittleness. Symbolic systems fail silently at the edges of their knowledge. A medical expert system whose rules were written for adult patients may give dangerous answers when applied to pediatric cases if those cases weren't explicitly modeled.


Difficulty with ambiguity and uncertainty. Natural language is ambiguous. The real world is probabilistic. Pure symbolic logic handles neither well without extensions like probabilistic logic networks or fuzzy logic.


Poor performance on perception tasks. Recognizing that a photograph contains a dog requires processing raw pixels — a task that is trivial for neural networks and extremely hard to encode in rules.


Scalability challenges. As a rule base grows, interactions between rules become complex. Contradictions emerge. Maintenance costs explode. Some production expert systems eventually became unmaintainable.


No autonomous learning. A symbolic system does not improve from experience unless a human updates its rules. It cannot adapt to distributional shift.


Common sense is hard. Humans know that a glass of water sitting on a tilted table will probably spill. Encoding every such implicit real-world regularity into a symbolic system is practically impossible.



11. The Frame Problem and Common Sense Reasoning

In 1969, John McCarthy and Patrick Hayes identified what became known as the frame problem: in a symbolic AI system that reasons about actions and change, how do you efficiently represent which things don't change when an action occurs?


If a robot picks up a ball, the ball moves. But the color of the ball doesn't change. The temperature of the room doesn't change. The number of chairs in the building doesn't change. Thousands of facts remain the same.


In formal logic, you must either explicitly state all these non-changes (combinatorially expensive) or find an efficient way to default to "unchanged" while still allowing exceptions. Neither is trivial at scale.


The frame problem is an instance of a deeper challenge: common sense reasoning. Humans know implicitly that:

  • Unsupported objects fall

  • Knives are sharp and can cut

  • People sleep roughly once per day

  • Cold weather means wearing more clothes


This knowledge is never explicitly taught. It is absorbed through embodied experience. Encoding it into a symbolic system would require millions of rules — and even then, the coverage would be incomplete.


Douglas Lenat's Cyc project, begun in 1984, attempted exactly this: building a comprehensive common-sense knowledge base from scratch. After decades of work, the system contains over 25 million facts and rules according to Cycorp's documentation, yet still struggles with the open-ended variety of real-world situations.


The frame problem and common sense gap remain fundamental obstacles to purely symbolic general intelligence.



12. Expert Systems

An expert system is a specialized symbolic AI application designed to replicate the decision-making of a human expert in a narrow domain.


Structure of a typical expert system:

  • Knowledge base: Domain-specific facts and IF-THEN rules, sourced from human experts

  • Inference engine: The reasoning mechanism (usually forward or backward chaining)

  • Working memory: Current facts about the specific case being analyzed

  • Explanation module: A facility to trace and explain how a conclusion was reached

  • Knowledge acquisition interface: Tools for experts to update the rule base


Landmark examples:

  • MYCIN (Stanford, 1972–1974): Diagnosed bacterial infections. Studies found it matched or exceeded specialist performance on controlled test cases (Yu et al., 1979, Journal of the American Medical Association).

  • XCON/R1 (Carnegie Mellon/DEC, 1980): Configured computer systems. Saved Digital Equipment Corporation an estimated $25–40 million per year by the mid-1980s (McDermott, 1982).

  • PROSPECTOR (SRI International, 1978): Geological mineral exploration. Successfully predicted a molybdenum deposit in Washington State that was later confirmed by drilling.

  • INTERNIST-1 (University of Pittsburgh, 1974): Internal medicine diagnosis covering over 500 diseases.


Expert systems succeeded in narrow, well-defined domains with stable rules, clear inputs, and measurable outputs. They struggled when rules were ambiguous, when the domain evolved rapidly, or when inputs arrived in messy real-world formats.



13. Symbolic AI in Natural Language Processing


Before statistical methods dominated, NLP was largely a symbolic enterprise.


Early NLP used:

  • Formal grammars (context-free grammars, transformational grammars)

  • Syntactic parsers that built parse trees from sentences

  • Semantic networks that mapped sentence meanings to logical forms

  • Hand-coded lexicons and morphological analyzers


Systems like SHRDLU (Terry Winograd, MIT, 1970) could understand and respond to natural language commands in a simulated block-stacking world using a symbolic parser combined with a knowledge base. It was impressive within its microworld but could not generalize.


Where symbolic NLP still matters in 2026:

  • Grammar checking: Rule-based systems remain part of commercial grammar checkers because they can apply specific grammatical rules consistently.

  • Information extraction: Named entity recognition, relation extraction, and slot-filling systems often use rule-based components alongside learned models.

  • Controlled natural language: Safety-critical industries (aviation, pharmaceuticals) sometimes require documents written in restricted subsets of language that can be formally processed.

  • Semantic parsing: Converting natural language queries into structured database queries (SQL, SPARQL) often benefits from symbolic components.

  • Regulatory document analysis: Legal and compliance text analysis uses symbolic rules to flag specific clause structures.


The dominant paradigm in NLP is now neural, but symbolic scaffolding often provides structure, constraints, and interpretability that pure neural approaches lack.



14. Symbolic AI in Robotics and Planning

AI planning is the branch of symbolic AI concerned with finding sequences of actions to achieve goals.


A planning system typically represents:

  • States: Descriptions of the world at a point in time

  • Actions: Operators that change state, with preconditions (what must be true before the action) and effects (what becomes true/false after)

  • Goals: Desired states to achieve


The STRIPS planning language (Fikes and Nilsson, SRI International, 1971) was an early formalization. Its modern descendants include PDDL (Planning Domain Definition Language), which remains a standard in academic AI planning research.


Example — simple robot planning:


A robot must deliver a document from Room A to Room B.

Actions:
- PickUp(document): Precondition: robot is in same room as document.
                    Effect: robot is holding document.
- MoveToRoom(B):    Precondition: robot is not holding anything heavy.
                    Effect: robot is in Room B.
- PutDown(document): Precondition: robot is holding document, robot is in Room B.
                     Effect: document is in Room B.

A planning algorithm searches through the action space to find: PickUp → MoveToRoom(B) → PutDown.
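That search can be sketched as breadth-first search over STRIPS-style states. The state and action encodings below are simplified assumptions (e.g., rooms are collapsed into flags like robot_in_A), not the original STRIPS notation:

```python
from collections import deque

# STRIPS-style actions: name -> (preconditions, add effects, delete effects)
actions = {
    "PickUp":      ({"robot_in_A", "doc_in_A"}, {"holding_doc"}, {"doc_in_A"}),
    "MoveToRoomB": ({"robot_in_A"}, {"robot_in_B"}, {"robot_in_A"}),
    "PutDown":     ({"holding_doc", "robot_in_B"}, {"doc_in_B"}, {"holding_doc"}),
}
initial = frozenset({"robot_in_A", "doc_in_A"})
goal = {"doc_in_B"}

def plan(state):
    """Breadth-first search through the state space for a goal-satisfying plan."""
    frontier, seen = deque([(state, [])]), {state}
    while frontier:
        current, path = frontier.popleft()
        if goal <= current:                       # all goal facts hold
            return path
        for name, (pre, add, delete) in actions.items():
            if pre <= current:                    # preconditions satisfied
                nxt = frozenset((current - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

print(plan(initial))  # ['PickUp', 'MoveToRoomB', 'PutDown']
```

Because the search is breadth-first, the first plan found is also a shortest one.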


Symbolic planning is powerful in structured environments with clear rules. It struggles in physical reality, where sensor noise, unpredictable objects, and continuous variables complicate the neat discrete-state assumption.


Modern robotics systems often use symbolic planners for high-level task planning (what to do next) while using neural or classical control algorithms for low-level execution (how to physically move).



15. Knowledge Representation

Knowledge representation is the study of how to encode information about the world so that a computer can use it for reasoning. It is the foundational problem of symbolic AI.


Forms of knowledge representation:


Logic-based representations use formal languages (propositional logic, first-order logic, description logic) to state facts and rules with precise semantics. They support rigorous inference but can be computationally expensive.


Frames (Minsky, 1975) represent knowledge as structured objects with slots and fillers — essentially objects with properties. Car is a frame with slots for color, make, owner, speed. Frames are intuitive and map naturally to object-oriented programming.
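The slot-and-filler idea, including inheritance of default fillers, can be sketched with plain dictionaries; the frame and slot names here are invented for illustration:

```python
# Frames as dicts of slots; an 'is_a' slot links a frame to its parent,
# giving inheritance of default fillers.
frames = {
    "vehicle": {"wheels": 4, "powered": True},
    "car":     {"is_a": "vehicle", "doors": 4},
    "my_car":  {"is_a": "car", "color": "red", "doors": 2},
}

def slot(frame, name):
    """Look up a slot, walking up the is_a chain until a filler is found."""
    while frame is not None:
        fillers = frames[frame]
        if name in fillers:
            return fillers[name]
        frame = fillers.get("is_a")
    return None

print(slot("my_car", "doors"))   # 2 (local filler overrides the default)
print(slot("my_car", "wheels"))  # 4 (inherited from 'vehicle')
```

The local `doors` filler shadowing the default on `car` is the frame-system version of method overriding in object-oriented programming.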


Semantic networks represent knowledge as graphs where nodes are concepts and edges are relationships. Dog → is-a → Mammal, Fido → is-instance-of → Dog. They are visually intuitive and efficient for inheritance reasoning.


Ontologies are formal, explicit specifications of conceptualizations of a domain (Gruber, 1993). They define classes, properties, relationships, and constraints. OWL (Web Ontology Language) is the W3C standard for building machine-readable ontologies, widely used in the Semantic Web and bioinformatics.


Knowledge graphs are large-scale graph databases encoding real-world entities and their relationships, combining elements of semantic networks and ontologies at scale.



16. Inference and Reasoning


Deductive Reasoning

Deduction derives certain conclusions from premises using valid rules of inference. If premises are true and the inference is valid, the conclusion must be true.

All mammals are warm-blooded.
Dogs are mammals.
Therefore, dogs are warm-blooded. [Certain]

Inductive Reasoning

Induction generalizes from specific observations to broader patterns. The conclusion is probable, not certain.

Observed: Every swan we have seen is white.
Conclusion: All swans are white. [Probable — but wrong; black swans exist]

Abductive Reasoning

Abduction infers the most likely explanation for an observation.

Observation: The patient has a fever and a rash.
Best explanation: The patient may have measles.

Abduction is central to diagnostic reasoning in medicine and fault detection in engineering.


Non-Monotonic Reasoning

In classical logic, once a fact is true, it stays true. Non-monotonic reasoning allows conclusions to be withdrawn when new information arrives. "Birds fly. Tweety is a bird. Therefore Tweety flies." But: "Tweety is a penguin. Penguins don't fly. Therefore Tweety does not fly." The earlier conclusion is retracted.


Default logic, circumscription, and answer set programming are formal frameworks for non-monotonic reasoning.
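The Tweety pattern reduces to a default rule with an explicit exception. A toy sketch, not a real default-logic solver:

```python
# Default reasoning sketch: "birds fly" holds unless an exception applies.
facts = {"tweety": {"bird"}}

def flies(name):
    """Default rule with an exception: birds fly, unless known to be penguins."""
    properties = facts.get(name, set())
    return "bird" in properties and "penguin" not in properties

print(flies("tweety"))  # True: the default conclusion

facts["tweety"].add("penguin")   # new information arrives
print(flies("tweety"))  # False: the earlier conclusion is withdrawn
```

The second call shows the non-monotonic step: adding a fact removed a conclusion, which classical logic never does.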



17. Symbolic AI and Knowledge Graphs

A knowledge graph is a structured representation of real-world entities (people, places, organizations, products, concepts) and the relationships between them, stored as a graph where nodes are entities and edges are relationships.


Examples:

  • Google Knowledge Graph (launched 2012): Powers the "knowledge panel" in Google Search — those information boxes showing structured facts about people, places, and organizations. Estimated to contain hundreds of billions of facts.

  • Wikidata (launched 2012, Wikimedia Foundation): A free, collaborative knowledge graph with over 100 million items as of 2024, used by Wikipedia, Siri, and research tools worldwide.

  • Microsoft's Satori: Powers Bing search results.

  • Amazon Product Graph: Structures product data and relationships for e-commerce.


Knowledge graphs are symbolic structures. Each edge asserts a fact: (Barack Obama, born-in, Honolulu). The graph can be queried with logical precision: "List all US presidents born after 1945 who served two terms."
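A query like that reduces to filtering and joining triples. A minimal in-memory sketch with a hand-picked, illustrative triple set (real knowledge graphs would be queried with a language like SPARQL):

```python
# A tiny knowledge graph as (subject, predicate, object) triples.
triples = [
    ("barack_obama",  "is_a",         "us_president"),
    ("barack_obama",  "born_in",      "honolulu"),
    ("barack_obama",  "birth_year",   1961),
    ("barack_obama",  "terms_served", 2),
    ("george_w_bush", "is_a",         "us_president"),
    ("george_w_bush", "birth_year",   1946),
    ("george_w_bush", "terms_served", 2),
]

def attr(subject, predicate):
    """Return the first object for (subject, predicate), or None."""
    for s, p, o in triples:
        if s == subject and p == predicate:
            return o
    return None

# "US presidents born after 1945 who served two terms"
presidents = {s for s, p, o in triples if p == "is_a" and o == "us_president"}
result = sorted(s for s in presidents
                if attr(s, "birth_year") > 1945 and attr(s, "terms_served") == 2)
print(result)  # ['barack_obama', 'george_w_bush']
```

Each clause of the English question maps to one predicate filter, which is why such queries can be answered with logical precision.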


Knowledge graphs are important in symbolic AI because they:

  • Enable structured query and retrieval

  • Support reasoning through relationship traversal

  • Provide grounded factual anchors for neural AI systems

  • Connect entities across data silos in enterprise settings


The combination of knowledge graphs with neural language models — used in enterprise search, chatbots, and question-answering systems — is one of the most active practical applications of hybrid symbolic-neural AI in 2026.



18. Symbolic AI Today

Symbolic AI is not historical. In 2026, it operates at scale in a wide range of production systems.


Business rules engines: Tools like Drools (Red Hat), IBM Operational Decision Manager, and FICO Blaze Advisor allow enterprises to encode and execute complex business rules in areas like insurance underwriting, loan eligibility, and pricing. These are commercial-scale symbolic AI systems.


Legal technology: Contract analysis platforms use rule-based extraction to flag specific clause types, compliance requirements, and risk terms. LexisNexis, Thomson Reuters, and numerous legal tech startups embed symbolic reasoning in their document review pipelines.


Tax and compliance: Tax calculation engines (used by H&R Block, TurboTax, and enterprise ERP systems) are symbolic rule engines implementing tax codes in machine-executable form.


Medical decision support: Clinical decision support systems (CDSS) such as those embedded in Epic and Cerner EHR platforms use symbolic rules to flag drug interactions, dosing thresholds, and diagnostic criteria.


Formal verification: In safety-critical software development (aerospace, automotive, nuclear), symbolic model checkers like SPIN and NuSMV verify that software designs satisfy formal specifications before code is deployed.


Semantic web and linked data: The W3C's OWL and RDF standards, used in bioinformatics, government open data, and enterprise data integration, are symbolic knowledge representation systems in active deployment.


Automated planning in logistics: Companies like FedEx, UPS, and Amazon use variants of automated planning and optimization — symbolic and hybrid approaches — to route packages, schedule deliveries, and manage warehouse operations.



19. Neuro-Symbolic AI

Neuro-symbolic AI combines neural networks (good at perception, pattern recognition, and language) with symbolic reasoning (good at logic, rules, planning, and explainability).


The core idea: use neural networks where data is messy and rules are unknown; use symbolic systems where logic is required and decisions must be explainable.


Research directions in 2026:


Perception + reasoning: Neural models process raw inputs (images, audio, text) and output structured symbolic representations. Symbolic reasoners then apply rules to those representations. A medical imaging system might use a neural network to identify lesion candidates, then a symbolic rule engine to apply diagnostic criteria.
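That pipeline can be sketched as follows. The neural stage is a stub standing in for a real detector, and the diagnostic thresholds are invented for illustration — they are not clinical criteria.

```python
# Hybrid pipeline sketch: neural perception -> symbolic rule application.
# detect_lesions is a stand-in for a real neural model; the thresholds
# below are illustrative only, not medical guidance.

def detect_lesions(image):
    """Stub for a neural detector emitting structured symbolic facts."""
    return [{"diameter_mm": 7.2, "irregular_border": True}]

def apply_diagnostic_rules(lesions):
    """Symbolic stage: apply explicit, auditable rules to each candidate."""
    flagged = []
    for lesion in lesions:
        if lesion["diameter_mm"] > 6.0 and lesion["irregular_border"]:
            flagged.append((lesion, "refer: size and border criteria met"))
    return flagged

findings = apply_diagnostic_rules(detect_lesions(image=None))
for lesion, reason in findings:
    print(reason)  # every flag carries a traceable, rule-based justification
```

The division of labor is the point: the neural component handles the messy perception problem, while every downstream decision remains a traceable rule application.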


LLMs with symbolic tools: Large language models are paired with external knowledge graphs, rule engines, calculators, and formal logic systems to ground their outputs. This addresses hallucination — a known failure mode where LLMs generate plausible-sounding false information — by anchoring responses to verified facts.


Neural theorem proving: Neural networks are trained to guide symbolic theorem provers, making formal verification faster. DeepMind's AlphaProof and AlphaGeometry systems (2024) demonstrated that neural guidance can significantly improve performance on formal mathematical reasoning tasks.


Differentiable programming: Some researchers are building neural architectures that can perform symbolic operations (sorting, counting, logical operations) as differentiable computations, allowing end-to-end training.


IBM Neuro-Symbolic AI program: IBM Research has published extensively on systems that integrate neural and symbolic components for tasks like question answering, visual reasoning, and commonsense inference.


The challenge in neuro-symbolic AI is the interface problem: neural outputs are continuous and probabilistic; symbolic systems need discrete, certain inputs. Bridging this cleanly is an active research challenge without a dominant solution as of 2026.



20. Symbolic AI and Large Language Models

Large language models like GPT-4, Claude, and Gemini are statistical models. They do not reason symbolically in the classical sense. They predict likely next tokens based on patterns learned from enormous corpora of text.


When an LLM produces a chain-of-thought explanation, it is generating text that looks like logical reasoning. The model learned that step-by-step reasoning patterns are associated with correct answers in training data. This is a powerful statistical mimic of reasoning — not formal symbolic inference.


Key differences:

Property | LLM | Symbolic AI
Knowledge encoding | Statistical, in weights | Explicit, in symbols/rules
Reasoning | Pattern-based mimicry | Formal deduction
Verifiability | Hard to audit | Fully auditable
Consistency | Variable | Deterministic
Hallucination | Frequent | Does not apply (errors are rule errors)
Data requirements | Enormous | Low

How LLMs benefit from symbolic components:

  • Retrieval-augmented generation (RAG): LLMs query structured knowledge bases or vector stores to ground their responses in verified facts.

  • Tool use: LLMs call external calculators, code interpreters, APIs, and database engines — delegating symbolic tasks to systems built for them.

  • Knowledge graph integration: LLMs are combined with structured knowledge graphs for question answering, reducing hallucination.

  • Formal verifiers: LLM-generated code is checked by symbolic verifiers before deployment.

  • Planning systems: LLMs generate candidate plans; symbolic planners verify their feasibility and logical consistency.
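The tool-use pattern — the LLM proposes, a symbolic component computes and verifies — can be sketched like this. The `llm_answer` function is a stub standing in for a model call, and the deliberately wrong answer it returns is invented for illustration.

```python
# Sketch of the "tool use" pattern: delegate arithmetic to a symbolic
# evaluator rather than trusting generated text. llm_answer is a stub.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Deterministic arithmetic evaluator — the symbolic tool."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def llm_answer(question):
    """Stub standing in for a language model's free-text answer."""
    return "437"  # plausible-looking but wrong

expr = "17 * 26"
verified = safe_eval(expr)                    # computed symbolically: 442
trusted = llm_answer(expr) == str(verified)   # the LLM's guess is rejected
print(verified, trusted)
```

The same shape — generate, then check against a deterministic component — underlies code verifiers and plan validators as well.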


The 2026 production AI stack increasingly looks hybrid: an LLM as a flexible interface layer, with symbolic systems providing structure, verification, and grounding beneath.



21. Practical Applications

Domain | Application | Why Symbolic AI
Healthcare | Drug interaction checkers | Precise rule-based safety logic
Finance | Credit underwriting | Auditability, regulatory compliance
Legal | Contract clause extraction | Rule-based pattern matching
Tax | Tax calculation engines | Codified tax law
Fraud detection | Rules-based fraud flags | Speed, interpretability
Manufacturing | Configuration systems | Constraint satisfaction
Logistics | Route planning | Optimization over defined constraints
Gaming | Game AI behavior trees | Deterministic NPC logic
Education | Intelligent tutoring systems | Rule-based feedback
Cybersecurity | Intrusion detection rule sets | Pattern matching on known signatures
Search engines | Structured data extraction | Knowledge graph queries
Robotics | Task planning | Formal action representation
Formal verification | Safety-critical software checks | Mathematical proof of properties
Customer support | Decision tree bots | Interpretable escalation logic



22. When to Use (and Not Use) Symbolic AI


Use Symbolic AI when:

  • Rules are known and can be explicitly stated

  • Decisions must be explainable to regulators, users, or auditors

  • Training data is scarce or expensive

  • The domain requires logical consistency (no contradictions)

  • Formal verification of behavior is required

  • Expert knowledge is available and stable

  • The environment is structured and well-defined

  • Constraints must be enforced absolutely (e.g., safety rules)


Avoid Symbolic AI when:

  • The task requires processing raw perception data (images, audio, video)

  • Rules are unknown and must be inferred from data

  • The environment is highly ambiguous or unstructured

  • The domain changes so rapidly that manual rule updates can't keep pace

  • The problem requires generalizing from complex, high-dimensional patterns

  • Common sense reasoning about open-ended situations is required



23. Common Misconceptions

"Symbolic AI is obsolete." False. Symbolic AI powers production systems at scale across finance, healthcare, legal tech, logistics, and enterprise software. It did not disappear — it went out of research fashion while remaining essential in practice.


"Machine learning replaced symbolic reasoning." Machine learning replaced symbolic AI in many research benchmarks. It did not replace it in regulated industries, safety-critical systems, or domains that require auditability.


"Symbolic rules are always simple." Modern business rules engines like IBM ODM manage thousands of interdependent rules encoding complex logic. Tax codes alone span thousands of pages of conditional logic.


"Symbolic AI cannot be used with neural networks." Neuro-symbolic AI is an active research and product development area. LLMs are routinely paired with knowledge graphs, rule engines, and formal verifiers in production.


"LLMs make symbolic reasoning unnecessary." LLMs hallucinate. They are inconsistent across runs. They cannot guarantee logical correctness. For applications where accuracy, consistency, and auditability matter, symbolic components are not optional.


"Symbolic AI is only academic." FICO's credit scoring infrastructure, enterprise ERP compliance engines, and hospital drug interaction checkers are symbolic AI. These systems are about as far from academic as systems can get.



24. Symbolic AI, Explainability, and Trust

One of the most urgent problems in AI in 2026 is explainability: the ability of a system to justify its decisions in terms humans can understand and evaluate.


Neural networks are notoriously opaque. Explainable AI (XAI) techniques like LIME, SHAP, and attention visualization provide post-hoc approximations of what a neural model "attended to" — but they are approximations, not ground truth explanations.


Symbolic AI offers native explainability. Every conclusion has a proof tree: a traceable chain from input facts through rules to output. This is not an add-on — it is a structural property of the architecture.


This matters in regulated domains. The EU's AI Act (in force since 2024, with obligations phasing in through 2026 and beyond) requires that certain high-risk AI systems provide meaningful explanations for automated decisions affecting individuals. Symbolic systems are well-positioned to satisfy these requirements; neural systems require additional XAI scaffolding that may not meet the legal threshold.


Important caveat: Explainability is not the same as correctness. A symbolic system can give a fully traceable, clearly wrong answer if its rules are incorrect, incomplete, or outdated. Transparency about reasoning does not substitute for accurate rules.



25. Technical Deep Dive


First-Order Logic (FOL)

First-order logic extends propositional logic with variables, predicates, quantifiers, and functions — making it expressive enough to represent most factual knowledge.

∀x: Human(x) → Mortal(x)        [All humans are mortal]
Human(Socrates)                   [Socrates is human]
∴ Mortal(Socrates)               [Therefore Socrates is mortal]

Predicates like Human(x) and Mortal(x) take arguments. Quantifiers ∀ (for all) and ∃ (there exists) allow statements about classes of objects, not just individuals.


FOL is the backbone of most symbolic AI knowledge representation.
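The syllogism above can be reproduced with a minimal forward-chaining sketch — predicates as tuples of symbols, and the universal rule implemented as a function over the fact set:

```python
# The classic syllogism as forward chaining over ground facts.
# Facts are (predicate, argument) tuples; the rule implements
# "for all x: Human(x) -> Mortal(x)".

facts = {("Human", "Socrates")}

def human_implies_mortal(facts):
    """Derive Mortal(x) for every x asserted to be Human."""
    return {("Mortal", x) for pred, x in facts if pred == "Human"}

facts |= human_implies_mortal(facts)
print(("Mortal", "Socrates") in facts)  # True — derived, not stored
```

The conclusion `Mortal(Socrates)` was never asserted; it follows from the rule, and the derivation can be replayed step by step — the auditability property discussed throughout this guide.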


Description Logics

Description logics (DL) are a family of knowledge representation languages that balance expressiveness with computational tractability. OWL (Web Ontology Language) is based on DL.


A DL knowledge base has:

  • A TBox (terminological box): definitions of concepts and their relationships

  • An ABox (assertional box): facts about specific individuals


DL reasoners can automatically classify new instances, check consistency, and infer implied facts from an OWL ontology.
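A toy version of that inference can be sketched with a TBox of direct superconcepts and an ABox of asserted memberships. This is illustrative only — real DL reasoners (HermiT, Pellet, and similar) handle far richer axioms plus consistency checking.

```python
# Toy TBox/ABox sketch: infer implied class memberships by walking
# the subsumption hierarchy. Concept and individual names are invented.

# TBox: concept -> direct superconcepts.
TBOX = {"Dog": {"Mammal"}, "Mammal": {"Animal"}, "Animal": set()}

# ABox: individual -> asserted concepts.
ABOX = {"rex": {"Dog"}}

def classify(individual):
    """Return all concepts the individual belongs to, direct or implied."""
    inferred, frontier = set(), set(ABOX[individual])
    while frontier:
        concept = frontier.pop()
        if concept not in inferred:
            inferred.add(concept)
            frontier |= TBOX.get(concept, set())
    return inferred

print(classify("rex"))  # {'Dog', 'Mammal', 'Animal'} — two memberships inferred
```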


Search Algorithms

Symbolic AI problems are often framed as search problems: find a path from an initial state to a goal state.

  • Breadth-first search: Explores all states at depth N before depth N+1. Guaranteed to find the shortest path, but slow and memory-hungry on large state spaces.

  • Depth-first search: Explores one branch deeply before backtracking. Uses less memory, but may not find the optimal solution.

  • A* search: Uses a heuristic function to estimate distance to goal. Efficient and optimal if the heuristic is admissible.
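Breadth-first search is the simplest of the three to sketch — here finding a shortest path through a small state graph (the graph itself is a made-up example):

```python
# Breadth-first search: states are explored in order of depth, so the
# first path that reaches the goal is guaranteed to be a shortest one.
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Swapping the `deque` for a priority queue ordered by path cost plus a heuristic turns this same skeleton into A*.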


PDDL (Planning Domain Definition Language)

PDDL is the standard language for expressing AI planning problems.

(:action pick-up
  :parameters (?x - block)
  :precondition (and (clear ?x) (on-table ?x) (hand-empty))
  :effect (and (holding ?x) (not (clear ?x))
               (not (on-table ?x)) (not (hand-empty))))

This encodes the pick-up action: valid when a block is clear and on the table with a free hand; results in holding the block and removing it from the table.


Constraint Satisfaction Problems (CSPs)

A CSP has variables, domains (possible values), and constraints (rules about valid combinations). Solving it means finding an assignment of values that satisfies all constraints.


Sudoku is a CSP. Scheduling problems are CSPs. Configuration problems — which components can go into which products — are CSPs. Constraint solvers are sophisticated symbolic AI engines used widely in logistics, finance, and manufacturing.
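A backtracking solver for a tiny map-coloring CSP shows the core mechanics — variables (regions), domains (colors), and pairwise constraints (neighbors must differ). The region names follow the classic Australia example; the solver itself is a minimal sketch, not a production constraint engine.

```python
# Backtracking CSP solver: assign a color to each region so that no two
# adjacent regions share a color.

NEIGHBORS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
COLORS = ["red", "green", "blue"]

def solve(assignment=None):
    """Depth-first assignment with backtracking on constraint violation."""
    assignment = assignment or {}
    if len(assignment) == len(NEIGHBORS):
        return assignment
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        # Constraint check: differ from every already-colored neighbor.
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            result = solve({**assignment, var: color})
            if result:
                return result
    return None  # dead end: backtrack to the caller

solution = solve()
print(solution)  # a complete assignment satisfying every constraint
```

Industrial constraint solvers add constraint propagation, smarter variable ordering, and learned no-goods on top of exactly this backtracking core.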



26. Future of Symbolic AI

Pure symbolic AI will not power general-purpose AI systems. The knowledge acquisition bottleneck, the frame problem, and the common sense gap make that implausible for open-ended tasks.


But symbolic AI is not competing for that role.


In 2026 and beyond, symbolic AI will matter most as a component in larger hybrid systems:

  • Verifying neural outputs — ensuring that LLM-generated code, plans, or decisions satisfy formal specifications

  • Grounding language models — connecting LLM outputs to structured, verified knowledge

  • Enforcing constraints — ensuring that AI-driven decisions in regulated domains comply with explicit rules

  • Planning and scheduling — providing the logical backbone for autonomous agents operating in structured environments

  • Explainability infrastructure — delivering the auditable reasoning trails that regulations increasingly require


The field of neuro-symbolic AI remains one of the most technically ambitious frontiers in AI research. The integration challenge is real. But the motivation is compelling: systems that can both perceive and reason, both learn and explain, both generalize and comply.


The future of reliable AI almost certainly involves symbolic components. The systems we trust with medical diagnoses, legal decisions, financial compliance, and safety-critical operations need to be both intelligent and accountable. That combination requires more than pattern matching. It requires reasoning.



FAQ


What is Symbolic AI in simple terms?

Symbolic AI is an approach to artificial intelligence that represents knowledge as explicit symbols, rules, and logical relationships. Instead of learning from data, it reasons by applying rules to known facts. It can explain its decisions step by step because every conclusion follows from traceable logic.


Is Symbolic AI still used today?

Yes. Symbolic AI is used in business rules engines, tax calculation software, drug interaction checkers, legal tech platforms, knowledge graphs, compliance systems, and formal verification tools. It powers critical infrastructure in finance, healthcare, and logistics.


How is Symbolic AI different from machine learning?

Machine learning discovers patterns from data without explicit rules. Symbolic AI uses hand-coded rules and logic rather than learning. Machine learning is better at perception tasks and messy real-world data; symbolic AI is better at logical reasoning, auditability, and domains with known, stable rules.


Is Symbolic AI the same as rule-based AI?

Rule-based AI is the most common form of Symbolic AI, but symbolic AI is broader. It also includes automated theorem proving, ontology reasoning, constraint satisfaction, semantic networks, and planning systems — not all of which are purely rule-based.


What is an example of Symbolic AI?

A tax calculation engine that applies IRS rules to income data is Symbolic AI. A drug interaction checker that flags dangerous combinations based on pharmacological rules is Symbolic AI. A knowledge graph that stores and queries facts about entities and their relationships is Symbolic AI.


Why did Symbolic AI decline?

The main reasons were the knowledge acquisition bottleneck (it's expensive to encode expert knowledge in rules), brittleness (systems failed outside their defined scope), the inability to handle ambiguity and uncertainty well, and the rise of machine learning methods that performed better on perception and pattern-matching benchmarks.


What is GOFAI?

GOFAI stands for "Good Old-Fashioned AI," a term coined by philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea. It refers to the classical symbolic approach to AI dominant from the 1950s to the 1980s, characterized by explicit rule-based reasoning.


What is neuro-symbolic AI?

Neuro-symbolic AI combines neural networks (good at perception and pattern recognition) with symbolic reasoning systems (good at logic, rules, and explainability). The goal is to get the benefits of both: systems that can learn from messy real-world data and still reason rigorously and transparently.


Are knowledge graphs Symbolic AI?

Yes. Knowledge graphs are a form of symbolic AI. They encode facts about real-world entities as structured symbolic relationships (triples of subject–predicate–object) and support logical query and reasoning over those facts.


Can Symbolic AI learn?

Not autonomously. Classical symbolic systems do not improve from experience — humans must update the rules. Some hybrid approaches allow learned components (e.g., inductive logic programming can learn rules from examples), but this is not characteristic of mainstream symbolic AI.


Are large language models Symbolic AI?

No. LLMs are statistical models that predict text based on learned patterns. They do not reason symbolically in the formal sense. They can generate text that looks like logical reasoning, but this is pattern mimicry, not formal inference. LLMs can be combined with symbolic components to improve accuracy and verifiability.


What are the main limitations of Symbolic AI?

The main limitations are: the difficulty and cost of encoding knowledge (knowledge acquisition bottleneck), brittleness when inputs fall outside the defined scope, inability to handle perceptual data well, poor scaling to common-sense reasoning, and the maintenance burden as rule bases grow large.


What domains are best suited for Symbolic AI?

Domains with explicit rules, low ambiguity, compliance requirements, and high explainability needs: tax and financial regulation, medical decision support, legal reasoning, manufacturing configuration, formal software verification, and logistics planning.


What is the frame problem?

The frame problem, identified by McCarthy and Hayes in 1969, is the challenge of efficiently representing which facts don't change when an action occurs. In formal logic, representing all the non-effects of an action is computationally intractable at scale and is a fundamental obstacle for symbolic AI in dynamic real-world settings.



Key Takeaways

  • Symbolic AI represents knowledge as explicit symbols, rules, and logical relationships — not as learned numerical patterns.


  • It was the dominant approach to AI from the 1950s through the 1980s, producing expert systems that achieved real commercial value.


  • Its core strengths are interpretability, logical consistency, low data requirements, and suitability for regulated domains.


  • Its core weaknesses are the knowledge acquisition bottleneck, brittleness at domain boundaries, and difficulty with perception and common sense.


  • Symbolic AI never disappeared; it powers business rules engines, compliance systems, knowledge graphs, and planning systems at production scale today.


  • Neuro-symbolic AI — combining neural and symbolic components — is a major research frontier addressing the limitations of both pure approaches.


  • LLMs are not symbolic AI, but they are increasingly paired with symbolic components (knowledge graphs, rule engines, verifiers) to improve reliability.


  • In regulated industries, symbolic AI's native explainability addresses legal and ethical requirements that neural systems must work harder to satisfy.


  • The future of trustworthy AI will likely require both learned and symbolic components working together.



Actionable Next Steps

  1. If you are evaluating AI for a compliance-heavy domain (finance, healthcare, legal), audit whether your current or proposed AI system can produce a traceable explanation for every decision. If not, assess whether a symbolic component is needed.


  2. If you are building an AI system that must be auditable, consider a rules engine like Drools, IBM ODM, or FICO Blaze Advisor for the logic layer, with a neural model handling only the parts requiring perception or language understanding.


  3. If you are a developer exploring the field, study Prolog, OWL/RDF, and PDDL — the practical languages of symbolic AI — to understand how knowledge is represented and queried formally.


  4. If you are using an LLM in production, evaluate where hallucination or logical inconsistency creates risk. Assess whether knowledge graph grounding, retrieval augmentation, or rule-based verification should be added.


  5. If you are a student or researcher, explore neuro-symbolic AI literature — IBM Research, MIT-IBM Watson AI Lab, and DeepMind's work on AlphaProof are strong starting points.


  6. If you are assessing AI regulatory compliance under the EU AI Act or similar frameworks, identify which decision points require meaningful explanations and design symbolic scaffolding to satisfy those requirements.



Glossary

  1. Abduction: Reasoning to the most likely explanation for an observed fact.

  2. Backward chaining: A reasoning strategy that starts from a goal and works backward to find supporting facts.

  3. Constraint satisfaction problem (CSP): A problem defined by variables, domains, and constraints; solved by finding assignments that satisfy all constraints.

  4. Deduction: Drawing certain conclusions from premises using valid logical rules.

  5. Description logic: A family of logic-based knowledge representation languages with controlled expressiveness; the basis of OWL.

  6. Expert system: A symbolic AI system that encodes domain-specific expertise in rules and applies an inference engine to derive recommendations.

  7. First-order logic (FOL): A formal logical language with predicates, variables, quantifiers, and functions; the foundation of most symbolic AI knowledge representation.

  8. Forward chaining: A reasoning strategy that starts from known facts and applies rules to derive new conclusions.

  9. Frame problem: The challenge of representing which facts remain unchanged after an action occurs in a formal logical system.

  10. GOFAI: Good Old-Fashioned AI; a retrospective term for classical symbolic AI approaches.

  11. Inference engine: The component of a symbolic AI system that applies rules to facts to derive conclusions.

  12. Knowledge base: A structured repository of facts and rules in a symbolic AI system.

  13. Knowledge graph: A large-scale graph database encoding real-world entities and their relationships as subject–predicate–object triples.

  14. Neuro-symbolic AI: AI approaches that combine neural networks with symbolic reasoning systems.

  15. Non-monotonic reasoning: Reasoning that allows previously drawn conclusions to be retracted when new information arrives.

  16. Ontology: A formal, explicit specification of the concepts, properties, and relationships in a domain.

  17. Predicate: A function returning true or false; represents properties or relationships: is_mammal(dog).

  18. Production system: A rule-based architecture consisting of IF-THEN rules (productions) applied to working memory.

  19. Semantic network: A graph-based knowledge representation where nodes are concepts and edges are relationships.

  20. Symbolic AI: An approach to AI that represents knowledge as explicit, discrete symbols, rules, and logical relationships.



References

  1. McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Dartmouth College.

  2. Newell, A., & Simon, H. A. (1956). The Logic Theory Machine: A Complex Information Processing System. RAND Corporation. https://www.rand.org/pubs/papers/P868.html

  3. McDermott, J. (1982). R1: A Rule-Based Configurer of Computer Systems. Artificial Intelligence, 19(1), 39–88. https://doi.org/10.1016/0004-3702(82)90021-2

  4. Yu, V. L., et al. (1979). Antimicrobial Selection by a Computer: A Blinded Evaluation by Infectious Disease Experts. JAMA, 242(12), 1279–1282. https://doi.org/10.1001/jama.1979.03300120033020

  5. Haugeland, J. (1985). Artificial Intelligence: The Very Idea. MIT Press.

  6. Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving. Artificial Intelligence, 2(3–4), 189–208. https://doi.org/10.1016/0004-3702(71)90010-5

  7. McCarthy, J., & Hayes, P. J. (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence. Machine Intelligence, 4, 463–502.

  8. Gruber, T. R. (1993). A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, 5(2), 199–220. https://doi.org/10.1006/knac.1993.1008

  9. Minsky, M. (1975). A Framework for Representing Knowledge. In P. Winston (Ed.), The Psychology of Computer Vision. McGraw-Hill.

  10. Lighthill, J. (1973). Artificial Intelligence: A General Survey. Science Research Council (UK). https://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm

  11. Winograd, T. (1972). Understanding Natural Language. Cognitive Psychology, 3(1), 1–191. https://doi.org/10.1016/0010-0285(72)90002-3

  12. Feigenbaum, E. A., Buchanan, B. G., & Lederberg, J. (1971). On Generality and Problem Solving: A Case Study Using the DENDRAL Program. Machine Intelligence, 6, 165–190.

  13. World Wide Web Consortium. (2012). OWL 2 Web Ontology Language Document Overview. W3C Recommendation. https://www.w3.org/TR/owl2-overview/

  14. Wikidata. (2024). Wikidata Statistics. Wikimedia Foundation. https://www.wikidata.org/wiki/Wikidata:Statistics

  15. AlphaProof and AlphaGeometry 2. (2024). AI achieves silver-medal standard solving International Mathematical Olympiad problems. DeepMind. https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/

  16. European Parliament. (2024). EU Artificial Intelligence Act. Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689




 
 