Autonomous Terrain & Hazard Exploration Navigation Agent
ATHENA is a real-time 3D rover autonomy simulator with multi-algorithm pathfinding, infinite procedural terrain, and multi-planetary deployment.
The entire system runs in-browser. No backend required.
Full demonstration: multi-algorithm pathfinding, manual driving, terrain controls, and planetary environment switching.
A rover is placed on an infinite procedurally generated planetary surface. The operator clicks anywhere on the terrain and the rover plans a path using one of four selectable algorithms, visualizing the search in real time.
The terrain extends infinitely in every direction with no boundaries. Slopes, craters, and rocks are all generated deterministically from seeded noise.
The operator can switch between Mars, Venus, Europa, and Titan. Each planet rebuilds the entire terrain, sky, lighting, fog, and surface colors to match its real physical conditions.
ATHENA implements four search algorithms. Each uses the same slope-based cost function but explores the search space in fundamentally different ways. The operator selects an algorithm, clicks a target, and watches the search expand in real time with a step-by-step visualizer.
Explores by estimated total cost f = g + h: actual cost traveled (g) plus a heuristic estimate to the goal (h).
Produces a focused beam that expands toward the target, exploring far fewer nodes than uninformed search. The heuristic is Euclidean distance.
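A minimal sketch of the f = g + h ordering on a 4-connected grid. The grid API, `stepCost`, and `passable` are illustrative stand-ins, not the simulator's code; the real cost function is slope-based.

```typescript
type Node = { x: number; y: number };

const key = (n: Node) => `${n.x},${n.y}`;
// Euclidean distance heuristic, as described above.
const euclidean = (a: Node, b: Node) => Math.hypot(a.x - b.x, a.y - b.y);
// Placeholder unit step cost; the simulator derives this from terrain slope.
const stepCost = (_a: Node, _b: Node) => 1;

function aStar(start: Node, goal: Node, passable: (n: Node) => boolean): Node[] | null {
  const open = new Map<string, Node>([[key(start), start]]);
  const g = new Map<string, number>([[key(start), 0]]);
  const came = new Map<string, Node>();

  while (open.size > 0) {
    // Pop the open node with the lowest f = g + h.
    let current!: Node;
    let best = Infinity;
    for (const n of open.values()) {
      const f = g.get(key(n))! + euclidean(n, goal);
      if (f < best) { best = f; current = n; }
    }
    if (key(current) === key(goal)) {
      // Reconstruct the path by walking parent links back to the start.
      const path = [current];
      let k = key(current);
      while (came.has(k)) { const p = came.get(k)!; path.unshift(p); k = key(p); }
      return path;
    }
    open.delete(key(current));
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nb = { x: current.x + dx, y: current.y + dy };
      if (!passable(nb)) continue;
      const tentative = g.get(key(current))! + stepCost(current, nb);
      if (tentative < (g.get(key(nb)) ?? Infinity)) {
        g.set(key(nb), tentative);
        came.set(key(nb), current);
        open.set(key(nb), nb);
      }
    }
  }
  return null; // goal unreachable
}
```

Replacing `euclidean` with `() => 0` removes the goal pull entirely, which is exactly why the uninformed variant floods outward in a uniform ring instead of a beam.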
A* without the heuristic. Explores by actual cost only, producing a uniform circular flood outward from the rover.
Guaranteed to find the optimal path, but explores significantly more nodes than A*. Where A* expands in a narrow beam, Dijkstra floods outward in concentric rings.
Instead of expanding on a grid, RRT randomly samples points in the world, finds the nearest existing tree node, and extends a branch toward the sample.
A 15% goal bias ensures the tree grows toward the target. Visualization is a branching tree structure.
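One extension step of the sampling loop can be sketched as follows. `STEP`, the bounds handling, and the injected `rand` are illustrative assumptions; only the 15% goal bias comes from the text.

```typescript
type P = { x: number; y: number };
const dist = (a: P, b: P) => Math.hypot(a.x - b.x, a.y - b.y);

const STEP = 2.0;       // branch length per extension (illustrative)
const GOAL_BIAS = 0.15; // 15% of samples are the goal itself

function extend(tree: P[], goal: P, bounds: number, rand: () => number): P {
  // Sample a point; with 15% probability aim straight at the goal.
  const sample: P = rand() < GOAL_BIAS
    ? goal
    : { x: (rand() * 2 - 1) * bounds, y: (rand() * 2 - 1) * bounds };

  // Find the nearest existing tree node.
  let nearest = tree[0];
  for (const n of tree) if (dist(n, sample) < dist(nearest, sample)) nearest = n;

  // Extend a fixed-length branch from that node toward the sample.
  const d = Math.max(dist(nearest, sample), 1e-9);
  const node: P = {
    x: nearest.x + ((sample.x - nearest.x) / d) * Math.min(STEP, d),
    y: nearest.y + ((sample.y - nearest.y) / d) * Math.min(STEP, d),
  };
  tree.push(node);
  return node;
}
```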
Searches backward from the goal to the rover. The expansion wave originates at the target and floods toward the rover's position.
Designed for replanning: when the environment changes mid-traverse, only the affected portion needs to be recomputed. Natural choice for dynamic environments.
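At its simplest, running backward means swapping the endpoints and reversing the result; `plan` here is an assumed stand-in for any of the grid searches. (The incremental-repair machinery that makes replanning cheap is not shown, only the search direction.)

```typescript
type Cell = { x: number; y: number };

// Reuse a forward planner backward: the expansion wave then originates
// at the goal, and the returned path is flipped to run rover -> goal.
function planBackward(
  plan: (from: Cell, to: Cell) => Cell[] | null, // assumed stand-in
  rover: Cell,
  goal: Cell,
): Cell[] | null {
  const reversed = plan(goal, rover);
  return reversed ? [...reversed].reverse() : null;
}
```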
All four algorithms implement the same stepper interface: step(), getClosedPositions(), getOpenPositions(). The visualization layer doesn't know or care which algorithm is running. Swap the algorithm, the viz just works.
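The contract can be sketched in a few lines. The method names come from the text; the `GridPos` shape and the `render` callback are assumptions.

```typescript
type GridPos = { x: number; y: number };

interface PathfinderStepper {
  step(): boolean;                 // advance one expansion; true when the search is done
  getClosedPositions(): GridPos[]; // visited set, drawn behind the wave
  getOpenPositions(): GridPos[];   // frontier, drawn as the expanding edge
}

// The visualizer depends only on the interface, never on a concrete algorithm.
function drawSearchFrame(
  stepper: PathfinderStepper,
  render: (closed: GridPos[], open: GridPos[]) => void,
): boolean {
  const done = stepper.step();
  render(stepper.getClosedPositions(), stepper.getOpenPositions());
  return done;
}
```

Because the visualizer calls nothing beyond these three methods, any object that satisfies the interface plugs in unchanged.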
The terrain is infinite. There are no edges, no loading screens, no prebuilt maps. The rover can drive in any direction forever.
The world is divided into 80-unit chunks. A 7×7 grid of chunks (49 total) is maintained around the rover.
As the rover moves, chunks behind it are disposed and new chunks ahead are generated; this streaming pass runs every 0.5 seconds.
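The streaming logic reduces to a set difference, sketched here with the numbers from the text (80-unit chunks, 7×7 grid); the key format is an illustrative choice.

```typescript
const CHUNK_SIZE = 80;
const RADIUS = 3; // 3 chunks each side of center -> 7x7 = 49 total

// Which chunk keys should exist for a rover at (roverX, roverZ)?
function desiredChunks(roverX: number, roverZ: number): Set<string> {
  const cx = Math.floor(roverX / CHUNK_SIZE);
  const cz = Math.floor(roverZ / CHUNK_SIZE);
  const keys = new Set<string>();
  for (let dx = -RADIUS; dx <= RADIUS; dx++)
    for (let dz = -RADIUS; dz <= RADIUS; dz++)
      keys.add(`${cx + dx},${cz + dz}`);
  return keys;
}

// Every 0.5 s: diff desired vs. loaded, build what entered, dispose what left.
function diffChunks(loaded: Set<string>, desired: Set<string>) {
  const toLoad = [...desired].filter(k => !loaded.has(k));
  const toUnload = [...loaded].filter(k => !desired.has(k));
  return { toLoad, toUnload };
}
```

Moving one chunk forward swaps exactly one 7-chunk column in and one out, so steady-state streaming cost is constant.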
3-layer fBm noise at different frequencies blended together, sampled from independent 256×256 seeded noise textures, plus a ridge noise layer for sharp geological features.
One continuous world-space function.
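A sketch of the layering, assuming `noise2` stands in for sampling the seeded noise textures (the real version samples an independent texture per layer); all frequencies and amplitudes here are illustrative.

```typescript
// noise2 is assumed to return a value in [0, 1].
function fbmHeight(x: number, z: number, noise2: (x: number, z: number) => number): number {
  // Three octaves at different frequencies blended together.
  const base   = noise2(x * 0.008, z * 0.008) * 12; // broad hills
  const mid    = noise2(x * 0.03,  z * 0.03)  * 4;  // medium undulation
  const detail = noise2(x * 0.12,  z * 0.12)  * 1;  // surface roughness
  // Ridge noise: fold the signal around its midpoint for sharp crests.
  const ridge = (1 - Math.abs(noise2(x * 0.015, z * 0.015) * 2 - 1)) * 6;
  return base + mid + detail + ridge;
}
```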
Deterministic craters placed using spatial cell hashing. The world is divided into 50-unit cells, each cell's hash determines its craters.
Crater geometry: parabolic depression with raised rim.
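A sketch of the cell-hash scheme under stated assumptions: the hash function, the crater probability, and the radius/depth/rim constants are all illustrative, not the simulator's values.

```typescript
const CELL = 50; // world divided into 50-unit cells

// Simple deterministic integer hash -> [0, 1); any seeded hash works here.
function hash2(ix: number, iz: number): number {
  let h = (ix * 374761393 + iz * 668265263) | 0;
  h = Math.imul(h ^ (h >>> 13), 1274126177);
  return ((h ^ (h >>> 16)) >>> 0) / 4294967296;
}

// Height contribution of craters at a world position: each cell's hash
// decides whether it holds a crater and where; same inputs, same craters.
function craterHeight(x: number, z: number): number {
  const ix = Math.floor(x / CELL), iz = Math.floor(z / CELL);
  let dh = 0;
  // Check this cell and its neighbors so craters can cross cell borders.
  for (let ox = -1; ox <= 1; ox++) for (let oz = -1; oz <= 1; oz++) {
    const h = hash2(ix + ox, iz + oz);
    if (h < 0.4) continue; // not every cell gets a crater
    const cx = (ix + ox + hash2(ix + ox, iz + oz + 99)) * CELL;
    const cz = (iz + oz + hash2(ix + ox + 99, iz + oz)) * CELL;
    const radius = 6 + h * 10;
    const depth = radius * 0.3;
    const d = Math.hypot(x - cx, z - cz) / radius; // 0 at center, 1 at rim
    if (d < 1.3) {
      const bowl = d < 1 ? depth * (d * d - 1) : 0; // parabolic depression
      const rim = depth * 0.25 * Math.max(0, 1 - Math.abs(d - 1) / 0.3); // raised rim
      dh += bowl + rim;
    }
  }
  return dh;
}
```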
getWorldHeight()
Every height query, whether for terrain mesh vertices, rover ground contact, pathfinding cost, or the engineering viewport, calls the same getWorldHeight() function. There is one source of truth.
Toggling the TERRAIN overlay renders a continuous slope gradient across the entire visible terrain. Every vertex is scored by its slope angle and colored on a smooth ramp.
Pathfinding, hazard classification, terrain analysis, and traversability scoring all derive from the same getWorldSlope() function.
One physical quantity drives all navigation intelligence.
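One plausible way to derive that quantity from the shared height function is central differences; the sample spacing `EPS` is an illustrative assumption.

```typescript
const EPS = 0.5; // sample spacing for the gradient estimate (illustrative)

// Slope angle in degrees at (x, z), computed from any shared height field.
function getWorldSlope(
  x: number, z: number,
  getWorldHeight: (x: number, z: number) => number,
): number {
  const dhdx = (getWorldHeight(x + EPS, z) - getWorldHeight(x - EPS, z)) / (2 * EPS);
  const dhdz = (getWorldHeight(x, z + EPS) - getWorldHeight(x, z - EPS)) / (2 * EPS);
  // atan of the gradient magnitude gives the steepest-ascent angle.
  return Math.atan(Math.hypot(dhdx, dhdz)) * (180 / Math.PI);
}
```

Because every consumer passes the same height function, pathfinding cost, hazard coloring, and the TERRAIN overlay can never disagree about how steep a point is.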
The operator can plan multi-stop missions with amber beacons dropped across the terrain.
The system plans each leg sequentially using the selected algorithm. Paths are concatenated into a single continuous route. The rover traverses the full mission automatically.
If any leg is blocked by impassable terrain, the mission reports the failure before the rover moves.
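The leg-by-leg flow can be sketched as follows; `planLeg` is an assumed stand-in for whichever algorithm the operator selected.

```typescript
type Pt = { x: number; y: number };

// Plan each leg in order; abort (return null) before the rover moves
// if any leg is unreachable.
function planMission(
  start: Pt,
  beacons: Pt[],
  planLeg: (from: Pt, to: Pt) => Pt[] | null, // assumed stand-in
): Pt[] | null {
  const route: Pt[] = [];
  let from = start;
  for (const beacon of beacons) {
    const leg = planLeg(from, beacon);
    if (!leg) return null; // report the blocked leg up front
    // Drop the duplicated junction point when concatenating legs.
    route.push(...(route.length ? leg.slice(1) : leg));
    from = beacon;
  }
  return route;
}
```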
ATHENA deploys to four planetary bodies. Each changes the entire simulation: terrain chunks rebuild with new height scales, surface and rock colors, crater density, roughness, sky gradients, fog density, sun color and intensity, and dust particle colors.
THE RED PLANET
THE HELLSCAPE
JUPITER'S ICE WORLD
SATURN'S GIANT MOON
Each planet is a data object, not a separate code path. Mars and Europa run identical terrain generation logic with different parameter values. Adding a new planet is adding one object to a dictionary.
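The shape of that dictionary might look like this; every field name and value below is illustrative, not the simulator's actual config.

```typescript
interface PlanetConfig {
  heightScale: number;   // terrain vertical exaggeration
  craterDensity: number; // probability weight for crater cells
  surfaceColor: string;
  skyColor: string;
  fogDensity: number;
  sunIntensity: number;
}

const PLANETS: Record<string, PlanetConfig> = {
  mars:   { heightScale: 1.0, craterDensity: 0.6, surfaceColor: "#b3592a", skyColor: "#d8a27a", fogDensity: 0.004, sunIntensity: 0.9 },
  venus:  { heightScale: 0.8, craterDensity: 0.2, surfaceColor: "#a8823c", skyColor: "#c9b26b", fogDensity: 0.015, sunIntensity: 0.5 },
  europa: { heightScale: 0.5, craterDensity: 0.3, surfaceColor: "#cfd8e0", skyColor: "#0b0d14", fogDensity: 0.001, sunIntensity: 0.4 },
  titan:  { heightScale: 0.7, craterDensity: 0.1, surfaceColor: "#8a6d3b", skyColor: "#b08d4f", fogDensity: 0.02,  sunIntensity: 0.2 },
};

// One lookup feeds the whole rebuild; a fifth planet is one more entry above.
function rebuildWorld(name: keyof typeof PLANETS): PlanetConfig {
  return PLANETS[name]; // in the simulator this drives chunk regen, sky, fog, lighting
}
```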
A separate Three.js renderer draws a detailed 3D rover model in an engineering viewport.
The viewport's camera is fully independent: the operator can orbit and zoom the engineering view without affecting the main simulation camera.
ATHENA is the navigation layer of a modular robotics ecosystem. The path-planning intelligence developed for planetary rovers generalizes to any mobile robot navigating unknown terrain.
ATHENA navigates. BROTEUS sees. ORION decides. CHIRON moves. DAEDALUS calibrates.