Building EigenAI: Teaching Math Foundations of AI Through Interactive Code

From determinants to hill climbing algorithms: how I turned academic math into an interactive learning platform.

"Whether it's concrete or code, structure is everything."


🎓 The Challenge: Making Math "Click"

As a self-taught software engineer transitioning from 10+ years in project management, I enrolled in MFA501 – Mathematical Foundations of Artificial Intelligence at Torrens University Australia under Dr. James Vakilian. The subject covered everything from linear algebra to optimization algorithms, the mathematical backbone of modern AI applications in:

  • Machine Learning (model training, optimization)
  • Natural Language Processing (text embeddings, transformations)
  • Computer Vision (image processing, feature extraction)
  • Speech Recognition (signal processing, pattern matching)

But here's the problem: abstract math doesn't stick unless you build something with it.

So instead of just solving problems on paper, I built EigenAI, an interactive Streamlit app that teaches mathematical concepts through live computation, step-by-step explanations, and real-time visualizations.

Can we make eigenvalues, gradients, and hill climbing algorithms as intuitive as playing with Legos?


That question drove the entire project.


🤖 What Is EigenAI?


EigenAI (playing on "eigenvalues" and "AI foundations") is a web-based educational platform that implements core mathematical concepts from AI foundations. It's structured around four assessments that progressively build complexity, with the app implementing the three case study assessments (2A, 2B, 3).

The 12-Week Journey

The subject covered 12 progressive modules:

| Week | Topic | Overview |
| --- | --- | --- |
| Weeks 1-5 | Linear Algebra Foundations | Sets, vectors, matrices, transformations, eigenvalues |
| Weeks 6-9 | Calculus & Optimization | Derivatives, integrals, hill climbing, simulated annealing, genetic algorithms |
| Weeks 10-12 | Probability, Statistics & Logic | Foundations for AI reasoning and decision-making |

Note: Module 6 was taught by Dr. Niusha Shafiabady


Assessment 1: Linear Algebra Fundamentals (Online Quiz)

  • ✅ Matrix operations (addition, multiplication, transpose)
  • ✅ Vector operations (magnitude, unit vectors, dot product, cross product)
  • ✅ Systems of equations (elimination, Gaussian elimination)
  • ✅ Linear transformations (stretching, reflection, projection)

The Challenge: 60-minute timed quiz covering Modules 1-2 foundational concepts; no coding, pure mathematical understanding.

Why It Matters: These fundamentals are the building blocks for understanding how data flows through neural networks and ML algorithms.

Note: Assessment 1 was a quiz-only assessment. The EigenAI app implements the three case study assessments (2A, 2B, 3) that required coding solutions.


Assessment 2A: Determinants & Eigenvalues (Case Study)

  • ✅ Recursive determinant calculation for n×n matrices
  • ✅ Eigenvalue and eigenvector computation (2×2 matrices)
  • ✅ Step-by-step mathematical notation using SymPy
  • ✅ Input validation and error handling

The Challenge: Implement cofactor expansion from scratch; no NumPy allowed for core logic, only pure Python.
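
To make that concrete, here's a minimal sketch of what a pure-Python cofactor expansion can look like (illustrative code, not the exact resolvers/determinant.py implementation):

```python
def determinant(matrix: list[list[float]]) -> float:
    """Recursive determinant via cofactor expansion along the first row."""
    n = len(matrix)
    if any(len(row) != n for row in matrix):
        raise ValueError("Matrix must be square")

    if n == 1:
        return matrix[0][0]
    if n == 2:
        return matrix[0][0] * matrix[1][1] - matrix[0][1] * matrix[1][0]

    det = 0.0
    for col in range(n):
        # Minor: drop row 0 and the current column
        minor = [row[:col] + row[col + 1:] for row in matrix[1:]]
        # Cofactor sign alternates across the first row: (-1)**col
        det += (-1) ** col * matrix[0][col] * determinant(minor)
    return det


print(determinant([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0],
                   [7.0, 8.0, 10.0]]))  # -3.0
```

The recursion is O(n!), which is exactly why production libraries use LU decomposition instead, but for teaching the structure of the expansion it's perfect.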

Why It Matters: Eigenvalues and eigenvectors are the foundation of:

  • PCA (Principal Component Analysis): dimensionality reduction for large datasets
  • Eigenfaces: facial recognition algorithms
  • Feature compression: reducing computational cost in ML models

Understanding determinants reveals why singular matrices break these algorithms.
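
For the 2×2 eigenvalue part, det(A - λI) = 0 collapses to the quadratic λ² - trace(A)·λ + det(A) = 0, so a sketch is just the quadratic formula (real-eigenvalue case only; the helper name is mine, not the app's eigen_solver.py API):

```python
import math

def eigenvalues_2x2(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Real eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    trace = a + d
    det = a * d - b * c
    disc = trace ** 2 - 4 * det  # discriminant of λ² - trace·λ + det = 0
    if disc < 0:
        raise ValueError("Complex eigenvalues: not handled in this sketch")
    root = math.sqrt(disc)
    return (trace + root) / 2, (trace - root) / 2


print(eigenvalues_2x2(2.0, 1.0, 1.0, 2.0))  # (3.0, 1.0)
```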


Assessment 2B: Calculus & Neural Networks (Case Study)

  • ✅ Numerical integration (Trapezoid, Simpson's Rule, Adaptive Simpson; see the sketch below)
  • ✅ RRBF (Recurrent Radial Basis Function) gradient computation
  • ✅ Manual backpropagation without TensorFlow/PyTorch
  • ✅ Comparative analysis of integration methods with error bounds

The Challenge: Compute gradients by hand for a recurrent network; feel the chain rule in your bones.

Why It Matters: Before using model.fit(), you should understand what .backward() actually does.
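
As a taste of the integration side, here's a sketch of composite Simpson's Rule in pure Python (the textbook formula; the actual resolvers/integrals.py may be structured differently):

```python
from typing import Callable

def simpson(f: Callable[[float], float], a: float, b: float, n: int = 100) -> float:
    """Composite Simpson's Rule over [a, b] with n (even) subintervals."""
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's Rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)  # weight 4 on odd nodes, 2 on even
    return total * h / 3


# Integral of x**2 over [0, 1] is 1/3; Simpson's Rule is exact for polynomials up to cubic
print(simpson(lambda x: x ** 2, 0.0, 1.0))  # 0.33333...
```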


Assessment 3: AI Optimization Algorithms (Case Study)

  • ✅ Hill Climbing algorithm for binary image reconstruction
  • ✅ Stochastic sampling variant (speed vs. accuracy trade-off)
  • ✅ Pattern complexity selector (simple vs. complex cost landscapes)
  • ✅ Real-time cost progression visualization

The Challenge: Reconstruct a 10×10 binary image from random noise using only local search; no global optimization, no backtracking.

Why It Matters: Hill climbing is the foundation of gradient descent, simulated annealing, and evolutionary algorithms. If you understand local optima here, you understand why neural networks get stuck.

💡 Key Insight from Module 6 (Dr. Niusha Shafiabady):

Hill climbing can get stuck in local optima with no guarantee of finding the global optimum. The cure?

  • Random restarts (try multiple starting points)
  • Random mutations (introduce noise)
  • Probabilistic acceptance (simulated annealing; see the sketch below)

This limitation explains why modern AI uses ensemble methods and stochastic optimization.
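
Simulated annealing isn't in EigenAI yet (it's on the roadmap below), but the probabilistic-acceptance idea fits in a few lines. A sketch of the Metropolis acceptance rule:

```python
import math
import random

def accept_move(current_cost: float, new_cost: float, temperature: float) -> bool:
    """Metropolis criterion: always accept improvements, sometimes accept uphill moves."""
    if new_cost <= current_cost:
        return True
    # Uphill moves pass with probability exp(-Δcost / T), so at high temperature
    # the search can climb out of local optima; cooling T recovers plain hill climbing.
    return random.random() < math.exp(-(new_cost - current_cost) / temperature)
```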


πŸ—“οΈ Project Timeline & Results

| Month | Assessment | Status |
| --- | --- | --- |
| October 2025 | Linear Algebra Quiz | 72.5% (C) |
| October 2025 | Determinants & Eigenvalues | 82% (D) |
| November 2025 | Integrals & RRBF | 84% (D) |
| December 2025 | Hill Climbing | Awaiting results |

Total Duration: 12 weeks of intensive mathematical foundations for AI


πŸ—οΈ Technical Architecture

| Layer | Technology | Purpose |
| --- | --- | --- |
| Frontend | Streamlit | Interactive UI with zero JavaScript |
| Core Logic | Pure Python 3.10+ | Type-hinted, no NumPy in algorithms |
| Math Rendering | SymPy + matplotlib | LaTeX-quality equations |
| Deployment | Streamlit Cloud | One-click deploy from GitHub |
| Version Control | Git + GitHub | Full project history since commit 1 |

Why Pure Python for Core Logic?

The assessment required implementing algorithms without numerical libraries to demonstrate understanding of the underlying math. This constraint forced me to:

  • Write cofactor expansion from scratch (not just np.linalg.det())
  • Implement Simpson's Rule manually (not just scipy.integrate.quad())
  • Build hill climbing with custom neighbor generation (not just scipy.optimize.minimize())

Result: Deep understanding of how these algorithms actually work under the hood.


πŸ—οΈ Key Features & Lessons Learned

1. Modular Architecture That Scales

```
eigenai/
├── app.py                   # Main Streamlit entry point
├── views/                   # UI components (one per assessment)
│   ├── set1Problem1.py      # Determinants UI
│   ├── set1Problem2.py      # Eigenvalues UI
│   ├── set2Problem1.py      # Integration UI
│   ├── set2Problem2.py      # RRBF UI
│   └── set3Problem1.py      # Hill Climbing UI
└── resolvers/               # Pure Python algorithms
    ├── determinant.py
    ├── eigen_solver.py
    ├── integrals.py
    ├── rrbf.py
    ├── hill_climber.py
    └── constructor.py
```

Lesson Learned: Separating algorithm logic from UI made testing 10x easier. When debugging the cost function, the UI stayed unchanged. When improving visualizations, the core math stayed untouched.
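
Because the resolvers are plain functions with no Streamlit imports, unit tests need no UI mocking at all. A sketch of what such a test could look like (the exact function name inside resolvers/determinant.py is assumed here):

```python
# test_determinant.py (run with: pytest test_determinant.py)
from resolvers.determinant import determinant  # hypothetical import path

def test_identity_determinant_is_one():
    assert determinant([[1.0, 0.0], [0.0, 1.0]]) == 1.0

def test_singular_matrix_determinant_is_zero():
    singular = [[1.0, 2.0], [2.0, 4.0]]  # second row is a multiple of the first
    assert determinant(singular) == 0.0
```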

Iterative Development: EigenAI evolved through 23+ versions:

| Version | Milestone |
| --- | --- |
| v0.0.1 | Streamlit setup, assets, pages |
| v0.1.0 | ✅ Assessment 2A submission |
| v0.1.8 | Added Hill Climbing Binary Image Reconstruction |
| v0.2.0 | ✅ Assessment 2B submission (Integration + RRBF) |
| v0.2.4 | Added stochastic sampling to Hill Climber |
| v0.2.6 | Added complex pattern selector |
| v0.3.0 | ✅ Assessment 3 submission (Hill Climbing Algorithm) |

Each assessment pushed the app forward, turning coursework into production-ready features (see the detailed CHANGELOG.md for the full history).


2. Hill Climbing: When "Good Enough" Is Good Enough

The most fascinating part was implementing Hill Climbing for image reconstruction:

The Problem:

  • Start with a random 10×10 binary image (noise)
  • Target: A circle pattern (100 pixels to match)
  • Cost function: Hamming distance (count of mismatched pixels; see the helper sketch below)
  • Neighborhood: Flip one pixel at a time (100 neighbors per state)
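
A sketch of those pieces in pure Python follows; the helper names and the flat-list state representation are chosen to line up with the loop below, not copied from resolvers/hill_climber.py, and TARGET is a placeholder for the circle pattern:

```python
import random

TARGET = [0] * 100  # placeholder: in the app this is the 10×10 circle pattern

def cost_function(state: list[int]) -> int:
    """Hamming distance: how many pixels differ from the target pattern."""
    return sum(1 for s, t in zip(state, TARGET) if s != t)

def generate_all_100_neighbors(state: list[int]) -> list[list[int]]:
    """Every state reachable by flipping exactly one of the 100 pixels."""
    neighbors = []
    for i in range(len(state)):
        neighbor = state[:]            # copy the current image
        neighbor[i] = 1 - neighbor[i]  # flip this pixel: 0 -> 1, 1 -> 0
        neighbors.append(neighbor)
    return neighbors

def random_state(size: int = 100) -> list[int]:
    """Random binary noise: the starting point for the search."""
    return [random.randint(0, 1) for _ in range(size)]
```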

The Algorithm:

```python
current_state = random_state()
iterations, max_iterations = 0, 500   # illustrative iteration cap
cost = cost_function(current_state)

while cost > 0 and iterations < max_iterations:
    neighbors = generate_all_100_neighbors(current_state)
    best_neighbor = min(neighbors, key=cost_function)

    if cost_function(best_neighbor) >= cost:
        break  # Stuck at a local optimum: no neighbor improves the cost

    current_state = best_neighbor
    cost = cost_function(current_state)
    iterations += 1
```

Results:

  • Simple pattern (circle): 100% success rate, avg 147 iterations
  • Complex pattern (checkerboard): 85% success rate, gets stuck in local optima
  • Stochastic sampling (50 neighbors): 95% success, 2x faster

The Insight: Hill climbing works beautifully on smooth cost landscapes but fails on complex ones.

This limitation explains why modern AI uses:

  • Simulated annealing: allows temporary cost increases (probabilistic acceptance)
  • Genetic algorithms: explore multiple paths simultaneously (population-based)
  • Gradient descent with momentum: escapes shallow local minima (velocity-based)

3. Stochastic Sampling: The Speed vs. Accuracy Trade-Off

One enhancement I added beyond requirements was stochastic hill climbing:

Instead of evaluating all 100 neighbors, randomly sample 50.

Trade-offs:

  • ⚡ Speed: 2x faster per iteration
  • ⚠️ Accuracy: May miss the optimal move 5% of the time
  • 📊 Final cost: Avg 0.5 pixels worse than full evaluation

Real-world application: When you have 10,000 neighbors (e.g., a 100×100 image), evaluating all of them is impractical. Stochastic sampling becomes mandatory.
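
On top of the helpers sketched earlier, the stochastic variant is essentially one random.sample call (again illustrative, not the exact app code):

```python
import random

def generate_sampled_neighbors(state: list[int], k: int = 50) -> list[list[int]]:
    """Stochastic variant: flip only k randomly chosen pixels instead of all 100."""
    neighbors = []
    for i in random.sample(range(len(state)), k):
        neighbor = state[:]
        neighbor[i] = 1 - neighbor[i]
        neighbors.append(neighbor)
    return neighbors
```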


KPIs

For the hill climbing implementation, I tracked:

| Metric | Simple Pattern | Complex Pattern |
| --- | --- | --- |
| Initial Cost | ~50 mismatched pixels | ~50 mismatched pixels |
| Final Cost | 0 (perfect) | 0-8 (may get stuck) |
| Iterations | ~147 | ~500 (hits plateau limit) |
| Time | <0.03s | <0.2s |
| Neighbors Evaluated | ~14,700 | ~50,000 |
| Success Rate | 100% | 85% |

Key Takeaway: Problem structure matters more than algorithm sophistication. A simple greedy search beats complex methods on convex problems.


💥 Insights

This project transformed my understanding of AI math:

| Before | After |
| --- | --- |
| "Eigenvalues are λ where det(A - λI) = 0" (memorized formula) | Built cofactor expansion recursively, saw how determinants break down |
| "Gradient descent minimizes loss" (vague intuition) | Computed RRBF gradients by hand, felt the chain rule propagate |
| "Hill climbing gets stuck in local optima" (heard in lectures) | Watched hill climbing fail on checkerboards, understood why cost landscape matters |

This transformation from abstract concepts to concrete understanding has fundamentally changed how I approach AI problems: I now see the math not as a collection of formulas, but as a toolkit of interconnected ideas that I can manipulate and reason about directly.

The hands-on experience has given me a deep, intuitive grasp of the mathematical foundations that underpin modern AI. It lets me approach complex problems with confidence and clarity, and to think about optimization and machine learning not just as algorithms to apply, but as mathematical principles I can understand and leverage in practice.


❓ What's Next for EigenAI?

Module 6 introduced three optimization paradigms:

  • ✅ Hill Climbing (implemented in Assessment 3)
  • 🕐 Simulated Annealing (probabilistic escape from local optima)
  • 🕐 Genetic Algorithms (population-based evolutionary search)

Upcoming v0.4.X+ features:

Enhanced Optimization Suite:

  • Simulated Annealing comparison (temperature schedules, acceptance probability)
  • Genetic Algorithm variant (crossover, mutation, selection operators)
  • A* Search for pathfinding (admissible heuristics)
  • Q-Learning demo (reinforcement learning basics)

Platform Enhancements:

  • Authentication: user login and progress tracking
  • LLM Integration: GPT-4 powered step-by-step explanations with rate limiting
  • Custom Agent Framework: built from the ground up using knowledge graphs and reasoning for problem-solving
  • Supabase BaaS: cloud storage for user data and solutions
  • Backend Framework: FastAPI or Flask for a RESTful API
  • Weekly Digest: agentic integration for learning analytics
  • Test Coverage: comprehensive unit testing with pytest
  • Security Enhancements: input sanitization, HTTPS enforcement

Try It Out

If you want to explore EigenAI:


Let's Connect!

Building EigenAI has been the perfect bridge between mathematical theory and practical software engineering. If you're:

  • Learning AI/ML foundations
  • Building educational tools
  • Passionate about making math accessible
  • Interested in optimization algorithms

I'd love to connect:


References & Further Reading

Academic Sources:

  • Strang, G. (2016). Introduction to linear algebra (5th ed.). Wellesley-Cambridge Press.
  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
  • Nocedal, J., & Wright, S. (2006). Numerical optimization. Springer.

Project Tech:


Tags: #machinelearning #python #streamlit #ai #mathematics #optimization #hillclimbing #education


Built with ☕ and calculus by Luis Faria

Student @ Torrens University Australia | MFA501 | Dec 2025
