1999: Opportunity for Europe (Patent ES2374881T3 – “Finder technology: Simple. Precise. Hallucination-free.”)


The Finder World – "Why Complicate When It Can Be Smart?"

We wrote history – with 1,000 categories against the data flood and AI hallucinations. The concept of citythek.de was planned as early as 1997: not as another expert playground, but as a reflection of the analog world, built around the Finder search engine, which was conceptually more advanced than today's AI systems. It was based on my ten years of experience teaching adult illiterates (beginning in 1985). My patent ES2374881T3 was the key: instead of confronting users with unmanageable amounts of data or error-prone algorithms, I relied on assigning 1,000 precise categories. The Semantic Web, by contrast, remained an ivory tower: RDF, DAML, OIL – the standards sounded like a secret language, and even tech enthusiasts despaired at the complexity.

Why this was better than anything available today:

  • Every Finder (token) was assigned to one or more of the 1,000 categories. This reduced the error rate to a minimum – because the AI only searched within clearly defined categories.
  • Users immediately saw the matching categories and could select the correct one with one click. The AI didn’t have to guess, but selected entries that were already stored in the corresponding category.
  • Minimal content, maximum efficiency: Instead of searching through endless amounts of data, the system worked with pre-structured, validated categories. The result? Faster answers, fewer errors, no distractions, less power consumption.
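The three bullets above describe a category-first lookup: a token maps to a small set of categories, the user picks one, and the system returns entries already stored under that category. A minimal Python sketch of that flow follows; all function names, categories, and entries are illustrative assumptions, not taken from the patent text.

```python
# Each finder (token) is assigned to one or more categories in advance.
# Data here is invented purely for illustration.
FINDER_INDEX = {
    "bakery": ["Food & Drink", "Local Shops"],
    "bread": ["Food & Drink"],
    "library": ["Culture & Education"],
}

# Entries are pre-structured and validated per category,
# so a lookup selects stored results instead of guessing.
CATEGORY_ENTRIES = {
    "Food & Drink": ["Bakery Schmidt, Main St. 4", "Café Sonne, Park Rd. 1"],
    "Local Shops": ["Bakery Schmidt, Main St. 4"],
    "Culture & Education": ["City Library, Church Sq. 2"],
}

def matching_categories(token: str) -> list[str]:
    """Step 1: show the user every category the token belongs to."""
    return FINDER_INDEX.get(token.lower(), [])

def entries_for(category: str) -> list[str]:
    """Step 2: after the user picks one category, return its stored entries."""
    return CATEGORY_ENTRIES.get(category, [])

print(matching_categories("Bakery"))   # ['Food & Drink', 'Local Shops']
print(entries_for("Food & Drink"))     # ['Bakery Schmidt, Main St. 4', 'Café Sonne, Park Rd. 1']
```

Because the search space is restricted to one user-confirmed category, an unknown token simply yields an empty category list rather than a fabricated answer.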

The Counter-Model to the Gatekeepers

While Google and Co. send users through labyrinths of advertising and distraction – like a store that deliberately builds aisles in front of the entrance to hang more posters – I relied on directness and user control. My system needed no detours because it mapped human logic from the start.

The Consequences: A Search Engine That Could Have Changed Europe

  • No hallucinations, no data flood: Users found what they were looking for – without detours, without manipulation.
  • Value creation in Europe: Instead of giving data away to US corporations, there would have been a European infrastructure with the Finder technology – transparent, democratic, and with fair distribution of value creation.
  • The foundation for getmysense: A social network that empowers users instead of spying on them (see 2012).

View from 2026 – "When Europe Slept Through the Future"

How the Semantic Web failed in reality – and why we are still paying the price today. In 1999, everything could have turned out differently. But instead of relying on my precise, user-friendly classification, the world trusted abstract theories and greed for profit.

The three fatal mistakes of the digital economy with regard to a functioning society:

  1. Complexity instead of simplicity: The Semantic Web failed because of its own overload. My 1,000 categories would have been the salvation.
  2. Gatekeepers instead of user control: Google, Facebook & Co. built their empires on distraction and data exploitation. My model would have shown: It can be done without advertising labyrinths.
  3. No transfer of human structures: Today, even modern systems struggle with hallucinations – because they have no clear categories. My approach from 1999 was already further ahead.

The consequences:

  • Google & Co. dominate: Not because they are better, but because they keep users trapped in their systems.
  • Europe remains dependent: Instead of promoting Finder, US technologies were imported – and control over data and values was lost.
  • The irony: Today, corporations are desperately searching for solutions for "trustworthy AI" – yet I already had it in 1999.
  • What remains? A question that still arises today: Why did Europe opt for complexity when there was a simple, better solution? (The answer follows – year by year, until 2045.)

GraTeach has become known beyond the region as a leadership academy, with the Kamp-Lintfort basic conversations and the online magazine. Anyone who wants to engage with the many projects should look at the entire GraTeach.de timeline from 1990 to 2001, with further information behind the links.

The GAP 1999:

A gap had not yet emerged: Google was only founded in 1998, as was Amazon Germany.

1985 – How It All Began


In this blog series, The Real Trillion Euro Gap, I compare two developments from 1999 to a preview of 2045:

  • A destructive misdevelopment of our society, shaped by short-term interests, and
  • A proactively designed digital future that preserves and evolves pre-digital achievements.

For decades, I have worked to accompany and advance a holistic concept for such a society. But the comparison shows:

A gap of trillions of euros has emerged—as economic damage and as the investment needed to rectify these misdevelopments.

This gap is no coincidence. It is the result of missed opportunities, ignored patents, and a digitization often dominated by autocratic business models.

Yet it is not just about numbers. It is about the question:

What could an inclusive, participatory society have looked like—and how can we still shape it?

A Pedagogical Milestone: The Segmenting Method (1985)

As early as 1985, Ingrid Daniels and I laid the foundation in our diploma thesis for a principle now known in AI as tokenization.

The Segmenting Method was a hybrid, participant-centered approach that broke down words into meaningful, recognizable units—not into letters, but into meaning-bearing segments.

Back then, it was about literacy. Today, this approach is relevant for AI, the Semantic Web, and inclusive education.

Even then, we spoke of tokens. (Excerpt from the teaching materials we created.)

Core Principles of the Segmenting Method

  • Segmentation instead of letter isolation:
    Words are broken down into recurring units such as “Haus-” (“house-”), “-tür” (“-door”), or “-licht” (“-light”).
    Example: “Hauslicht” (“house light”) → “Haus-” + “-licht” (analogous to “Tageslicht”/“daylight”).
    Goal: Rapid pattern recognition to accelerate reading and writing through association.
  • Contextual embedding:
    Segments are taught in everyday situations (e.g., “Where else do you find -licht?” → “Mondlicht”/“moonlight,” “Kerzenlicht”/“candlelight”).
    This promotes transferability and reduces cognitive load.
  • Participant orientation:
    The segments come from the learners’ own language—similar to the language experience approach.
    Learners identify patterns in self-created texts.
  • Visual support:
    Color coding or symbols anchor the segments.
    Example: All words with “-ung” (“-tion”/“-ing”) are marked in blue to highlight them as “noun-building blocks.”
  • Quick successes:
    Through frequent segments (e.g., “ge-”/“pre-”, “-en”/“-ing”), learners decode entire word families—without analyzing every letter.
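
The segmentation principle behind these points can be sketched in a few lines of Python. The segment inventory and the greedy longest-match rule below are illustrative assumptions for demonstration, not the method's original classroom procedure, which worked with learner-generated language.

```python
# Illustrative inventory of meaning-bearing segments (lowercase German units
# from the examples above); a real inventory would come from the learners.
SEGMENTS = {"haus", "tür", "licht", "tages", "mond", "kerzen", "ung"}

def segment(word: str) -> list[str]:
    """Split a word into known segments, preferring the longest match."""
    word = word.lower()
    parts, i = [], 0
    while i < len(word):
        # Try the longest known segment starting at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in SEGMENTS:
                parts.append(word[i:j])
                i = j
                break
        else:
            # No known segment here: keep the single character unsegmented.
            parts.append(word[i])
            i += 1
    return parts

print(segment("Hauslicht"))    # ['haus', 'licht']
print(segment("Tageslicht"))   # ['tages', 'licht']
print(segment("Kerzenlicht"))  # ['kerzen', 'licht']
```

As in the "-licht" examples above, recognizing one recurring segment immediately unlocks a whole word family, which is exactly the pattern-recognition effect the method aimed at.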

Advantages—Then and Now

  • Efficiency: Faster learning success through pattern recognition.
  • Motivation: Learners unlock word families and see progress.

Comparison: Segmenting Method (1985) vs. Modern Reading Methods (2026)

For each criterion, the Segmenting Method (1985) is listed first, then the modern counterpart (2026):

  • Basic Approach – Hybrid: segments + holism → Multimodal: phonics, whole-word, morphemics + digital tools
  • Units – Meaning-bearing segments (e.g., “-ung”) → Morphemics (“word building blocks”) + syllable method
  • Technology – Manual segmentation, later databases → AI-supported platforms (e.g., “Antura,” “GraphoGame”)
  • Participant Orientation – Everyday language, self-created texts → Personalized learning via algorithms (e.g., “Duolingo ABC”)
  • Visual Aids – Color coding, symbols → Gamification (e.g., “Endless Alphabet”), augmented reality
  • Target Group – Adult illiterates → Inclusive approaches for all age groups
  • Scientific Basis – Practical experience, linguistic intuition → Neuroscience, long-term studies on reading fluency

Current Trends Confirming the Segmenting Method

  • Morphemic approaches are now standard (e.g., in German primary schools).
  • My 1999 idea (European Patent ES2374881T3):
    Using 1,000 core categories—similar to today’s “high-frequency word” lists.
  • AI-driven segmentation:
    Tools like “GraphoGame” adaptively adjust learning paths—a principle we advocated early.
  • Language experience + technology:
    Apps like “Speechify” convert speech to text and automatically mark segments.
  • Social context:
    Modern methods emphasize collaborative learning (e.g., “literacy cafés”)—exactly like our approach.

Critique of Modern Methods

  • Over-technologization: Some tools lose the human dialogue (à la Freire/Freinet).
  • Cultural blind spots: Data-driven segmentation often ignores local contexts.
  • Commercialization: Many apps are not freely accessible—our approach focused on open knowledge sharing.

Conclusion: Why This Approach Advances Society

The Segmenting Method was visionary because it:

  • Anticipated hybridity (now standard in pedagogy),
  • Emphasized participant orientation and contextualization (now rediscovered),
  • Showed how socially relevant research drives innovation—without autocratic business models.

This example illustrates a central concern of the series The Real Trillion Euro Gap:

Digitization is not an end in itself.

It must be designed to be inclusive, participatory, and democratic—just like pre-digital research.

Where we fail to pay attention, we risk a digital autocracy serving the interests of a few—rather than a society that makes technology usable for all.

The question is not whether we can shape the future. It is whether we want to.

Everyone must—and everyone can—contribute to a livable society.

Are you afraid of a blackboard? No. So why be afraid to judge digitization?

Just like a blackboard, it is a tool!