
Building an Enterprise AI Support System in 2017 — Before It Was the Obvious Move

In 2017, before LLMs made enterprise AI mainstream, I built a multi-channel chatbot that automated L1 support at scale. Here's the architecture, the results, and what it looks like rebuilt with today's tools.

Tags: ai · machine-learning · chatbots · enterprise · distributed-systems

In 2017, enterprise AI adoption was still nascent. Large language models didn't exist in deployable form, and most organizations were still debating whether AI had a place in operational workflows at all. That year, while working at TOTVS, Latin America's largest enterprise software company, I built a multi-channel chatbot system that took first place in an internal hackathon. It automated Level 1 support for an enterprise environment, reduced support load, accelerated response times, and demonstrated a pattern the rest of the industry would spend the next several years catching up to.

This is not a retrospective framed by hindsight. The architectural decisions made in 2017 — multi-channel integration, intent classification with confidence-based escalation, human-in-the-loop fallback, and knowledge base consumption — map directly to what practitioners now call RAG (Retrieval-Augmented Generation) pipelines. The convergence is not coincidental; it reflects a consistent engineering logic that shows up regardless of the generation of AI tooling available.

The Problem: Inefficiency at Scale

Enterprise support systems tend to accumulate inefficiencies over time:

  • High volume of repetitive inquiries
  • Increasing response times
  • Rising operational costs
  • Skilled professionals handling low-complexity tasks

L1 support, in particular, is dominated by predictable, knowledge-based interactions, which makes it a strong candidate for automation.

The Approach: AI-Powered Multi-Channel Support

The solution was designed around a simple principle: deliver accurate answers instantly, using existing communication channels.

Multi-Platform Integration

The chatbot was deployed across:

  • Skype
  • Facebook Messenger
  • WhatsApp

This reduced friction and allowed users to interact with the system in familiar environments.

Architecture Overview

The system was structured to balance automation with reliability:

Message Abstraction Layer

Unified inputs from the different messaging platforms into a single internal message format, so downstream components never handled platform-specific payloads.
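The original code is long gone, so the sketch below is a reconstruction of the pattern rather than the actual implementation; the payload shapes, field paths, and adapter names are illustrative. The idea is simply one adapter per platform, all emitting the same internal type:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InboundMessage:
    """Platform-agnostic message handed to the rest of the pipeline."""
    user_id: str
    channel: str  # "skype" | "messenger" | "whatsapp"
    text: str

# One adapter per platform. The payload shapes here are simplified
# stand-ins for the real webhook formats, which differ per channel.
def from_messenger(payload: dict) -> InboundMessage:
    event = payload["entry"][0]["messaging"][0]
    return InboundMessage(event["sender"]["id"], "messenger",
                          event.get("message", {}).get("text", ""))

def from_skype(payload: dict) -> InboundMessage:
    return InboundMessage(payload["from"]["id"], "skype",
                          payload.get("text", ""))

ADAPTERS: dict[str, Callable[[dict], InboundMessage]] = {
    "messenger": from_messenger,
    "skype": from_skype,
}

def normalize(channel: str, payload: dict) -> InboundMessage:
    """Dispatch to the right adapter; everything after this is channel-blind."""
    return ADAPTERS[channel](payload)
```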

Intent Recognition

Given the NLP limitations of the era, the system relied on three components, sketched in code after the list:

  • Intent classification
  • Keyword extraction
  • Confidence scoring
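None of the original model code survives, so this is a minimal approximation of the classify-then-score step using scikit-learn; the training examples and intent labels are invented for illustration, whereas the real system was trained on historical L1 tickets:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples; the real training data was historical L1 tickets.
examples = [
    ("how do i reset my password", "password_reset"),
    ("forgot my login password", "password_reset"),
    ("invoice report will not open", "report_error"),
    ("error when generating the sales report", "report_error"),
    ("how do i create a new user", "user_management"),
    ("add a user to the finance module", "user_management"),
]
texts, labels = zip(*examples)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram keyword features
    LogisticRegression(max_iter=1000),    # class probabilities double as confidence
)
clf.fit(texts, labels)

def classify(text: str) -> tuple[str, float]:
    """Return (intent, confidence) for a single message."""
    probs = clf.predict_proba([text])[0]
    best = probs.argmax()
    return clf.classes_[best], float(probs[best])

print(classify("cannot remember my password"))
```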

Knowledge Base Integration

The chatbot consumed internal documentation to generate responses aligned with enterprise standards.
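Retrieval in 2017 meant lexical similarity rather than embeddings. A representative sketch, again with placeholder article contents standing in for the internal documentation, pairing TF-IDF vectors with cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder articles; the real KB was internal TOTVS documentation.
kb = [
    {"title": "Resetting a user password",
     "body": "Open the admin console, select the user, choose Reset Password."},
    {"title": "Troubleshooting report generation",
     "body": "Check the report service log and confirm the data source is online."},
]

vectorizer = TfidfVectorizer()
kb_matrix = vectorizer.fit_transform(a["title"] + " " + a["body"] for a in kb)

def best_article(query: str) -> tuple[dict, float]:
    """Return the closest KB article and its similarity score."""
    scores = cosine_similarity(vectorizer.transform([query]), kb_matrix)[0]
    idx = scores.argmax()
    return kb[idx], float(scores[idx])
```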

Decision Layer

  • High confidence → automated response
  • Low confidence → escalation to human support

This kept the common cases fast and automated without compromising accuracy on the hard ones.
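Tying the sketches above together, the decision layer reduces to a single threshold check. The 0.75 value below is illustrative; in practice the threshold was tuned against observed escalation outcomes:

```python
CONFIDENCE_THRESHOLD = 0.75  # illustrative; tune against escalation data

def route(message: InboundMessage) -> dict:
    """Answer automatically when confident, otherwise escalate with context."""
    intent, confidence = classify(message.text)
    if confidence >= CONFIDENCE_THRESHOLD:
        article, _ = best_article(message.text)
        return {"action": "auto_reply", "intent": intent,
                "answer": article["body"]}
    # Low confidence: hand off to a human with full context attached,
    # so the agent never starts from zero.
    return {"action": "escalate", "intent": intent,
            "confidence": confidence, "original_text": message.text}
```

Note that the escalation payload carries the classifier's best guess, so human agents get a head start even on messages the bot declines to answer.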

Observed Impact

Even as a hackathon project, the solution demonstrated clear operational benefits:

  • Reduction in L1 support demand
  • Faster response cycles
  • Improved focus of technical teams on higher-complexity issues

In similar enterprise scenarios, this class of automation is commonly reported to:

  • Reduce support costs by 30–60%
  • Improve response times by up to 10x
  • Increase overall efficiency in support operations

Alignment with Industry Evolution

At the time this solution was created, enterprise AI platforms were still emerging.

In the following years, major platforms demonstrated how data combined with machine learning could enable intelligent automation at scale. While developed independently, this project followed similar foundational principles:

  • Centralized knowledge consumption
  • Machine learning-driven interaction
  • Multi-channel communication
  • Automation of repetitive workflows

This convergence highlights how early implementations often anticipate broader industry patterns.

Context matters here. TOTVS Carol — TOTVS's enterprise AI and data platform — was officially launched on June 13, 2017, the same year as this hackathon project. The chatbot solution built here was subsequently integrated into Carol, becoming part of the platform's conversational AI capabilities. The Carol assistant represents the productized, enterprise-grade evolution of what this hackathon project demonstrated first — and the integration is the clearest signal that the approach was validated not just in theory, but by the product team responsible for TOTVS's AI strategy.

Design Principles That Made It Work

Focus on High-Frequency Interactions

Targeting common issues maximized impact with minimal complexity.

Integration Over Reinvention

Embedding the solution into existing tools avoided adoption barriers.

Human-in-the-Loop

Maintaining escalation paths preserved reliability and trust.

Constraints and Trade-offs

NLP Limitations

Without modern language models, intent recognition required careful tuning.

Data Quality

Inconsistent documentation required preprocessing and normalization.
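The exact preprocessing pipeline isn't preserved, but it amounted to the kind of normalization sketched below, applied before documents were indexed:

```python
import re
import unicodedata

def normalize_doc(text: str) -> str:
    """Flatten formatting differences across internal docs before indexing."""
    text = unicodedata.normalize("NFKC", text)  # unify unicode variants
    text = re.sub(r"<[^>]+>", " ", text)        # strip leftover HTML tags
    text = re.sub(r"\s+", " ", text)            # collapse runs of whitespace
    return text.strip().lower()
```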

Platform Variability

Different messaging platforms introduced integration overhead.

How This Would Be Built Today

Rebuilt with today's tooling, the same architecture would evolve in three main ways:

Large Language Models (LLMs)

Enable deeper contextual understanding and more natural interactions.

Retrieval-Augmented Generation (RAG)

Allow dynamic responses grounded in enterprise knowledge bases.
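A minimal RAG sketch, assuming the OpenAI Python SDK with its text-embedding-3-small and gpt-4o-mini models; any provider with embedding and chat endpoints slots in the same way, and the knowledge-base chunks are placeholders:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# Placeholder chunks; in practice these come from a chunked document store.
kb_chunks = [
    "To reset a password, open the admin console and select the user.",
    "Report generation errors are usually caused by an offline data source.",
]
kb_vectors = embed(kb_chunks)

def answer(question: str) -> str:
    """Retrieve the closest chunk, then generate an answer grounded in it."""
    q = embed([question])[0]
    sims = kb_vectors @ q / (
        np.linalg.norm(kb_vectors, axis=1) * np.linalg.norm(q))
    context = kb_chunks[int(sims.argmax())]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. "
                        "If the context is insufficient, say you don't know."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

The structure mirrors the 2017 system exactly: retrieval replaces TF-IDF lookup, and the grounded generation step replaces canned knowledge-base answers.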

Continuous Observability

Enable monitoring, feedback loops, and iterative improvement.
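The 2017 system had nothing like this. Today, each interaction would emit a structured event, aggregated offline to retune the confidence threshold and surface gaps in the knowledge base. A minimal sketch:

```python
import json
import time

def log_interaction(intent: str, confidence: float,
                    action: str, resolved: bool | None = None) -> None:
    """One structured event per interaction, consumed by offline analysis
    to retune thresholds and flag intents with poor resolution rates."""
    print(json.dumps({
        "ts": time.time(),
        "intent": intent,
        "confidence": round(confidence, 3),
        "action": action,       # "auto_reply" or "escalate"
        "resolved": resolved,   # filled in later from ticket outcomes
    }))
```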

Economic and Operational Impact

Improving efficiency in support operations has a direct effect on organizational performance.

AI-driven automation contributes to:

  • Increased productivity
  • Lower operational costs
  • Better allocation of specialized talent
  • Faster execution of digital initiatives

These factors become increasingly important as organizations scale and complexity grows.

Final Thoughts

This project demonstrated that even constrained AI systems can generate meaningful impact when applied to well-defined problems.

More broadly, it reinforced a recurring pattern in engineering: solutions that reduce friction in high-frequency workflows tend to scale quickly and deliver disproportionate value.


Why This Matters Beyond One Company

The US federal government and private sector are both under explicit pressure to accelerate AI adoption in operational workflows. Executive Order 14110 ("Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"), signed in October 2023, identifies AI-driven automation as a national strategic priority — calling on both federal agencies and the broader economy to deploy AI in ways that increase productivity, reduce operational costs, and improve service delivery.

What this 2017 project demonstrated — years before that EO existed — is that the core architectural logic for applying AI to enterprise operations is neither new nor experimental. Multi-channel integration, intent classification with escalation, knowledge base consumption, and human-in-the-loop fallback are the same primitives that modern LLM-based enterprise support systems use today, with better models underneath. The early execution of this pattern, before industry consensus had formed around it, is evidence of the kind of ahead-of-curve applied engineering that generates durable value.

For organizations still running human-only L1 support pipelines — which, in enterprise software, remains the majority — the business case is clear: 30–60% cost reduction, 10× faster response times, and reallocation of skilled technical staff from repetitive queries to work that requires genuine expertise. Documenting the architecture and lessons from this early implementation is a contribution to every team now evaluating how to apply AI to their own operational workflows.