The indoor mapping system
Full technical reference for the See U indoor positioning system: architecture, sensors, algorithms, APIs, and an implementation guide.
Overview
Introduction to the mapping system
What is the See U indoor mapping system?
The See U indoor system is a closed-space positioning platform that combines six sensor technology layers through advanced sensor fusion to deliver accurate location where GPS is unavailable.
Unlike Wi‑Fi- or Bluetooth-only products, we use a multimodal stack that includes:
- Magnetic geolocation: A unique magnetic “fingerprint” for every building
- Wi‑Fi RTT: Time-of-flight ranging for accurate distances
- Bluetooth LE: Beacons for anchors and points of interest
- Inertial sensors: Dead reckoning with a high-end IMU
- Computer vision: SLAM and visual markers
- GNSS: Smooth indoor–outdoor transition
1–2 m accuracy in 99% of cases for safe wayfinding
Real-time updates with under 100 ms end-to-end latency
Fusing many technologies for reliability and precision
System architecture
Structure and major components
Architecture building blocks
1. Data collection layer
Streams raw data from all device sensors in real time.
- Native drivers (iOS / Android)
- Sample-rate management
- Data buffering for batch fusion
- Timestamp synchronization
2. Preprocessing layer
Filters, normalizes, and prepares data for fusion.
- Low-pass & high-pass filters
- Auto calibration
- Outlier detection and removal
- Normalization & scaling
3. Sensor fusion layer
Combines multiple sources with modern fusion methods.
- Particle filter (probabilistic)
- Extended Kalman filter for smoothing
- Adaptive per-sensor weighting
- Indoor / outdoor context detection
4. Positioning layer
Computes the final on-map user position.
- Map matching
- Snap to valid path segments
- Floor / level detection
- Drift correction
5. Navigation layer
Path planning and turn-by-turn instructions.
- A* with accessibility weighting
- Audio instruction generation
- Dynamic re-routing
- Proximity alerts to POIs
6. Interface layer
APIs and SDKs for app integration.
- REST for web and servers
- Native SDKs (iOS & Android)
- WebSocket live updates
- Webhooks for position events
Sensor layers
Technical details per layer
Magnetic geolocation
Maps the Earth's magnetic field and building-induced anomalies; each structure has a unique magnetic fingerprint.
Technical specifications:
- Sample rate: 50–100 Hz
- Resolution: 0.01 µT
- Continuous autocalibration
- Interference compensation
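As an illustration of the fingerprint idea, a nearest-neighbour match of a magnetometer trace against a pre-surveyed database might look like the sketch below. The cell names, sample data, and sum-of-squared-differences metric are all illustrative assumptions, not the real SDK.

```swift
import Foundation

// Hypothetical fingerprint record: µT magnitudes surveyed along a map cell.
struct MagneticFingerprint {
    let cellId: String
    let magnitudes: [Double]
}

// Return the cell whose surveyed trace is closest (least squared error)
// to the live magnetometer reading.
func bestMatch(reading: [Double], database: [MagneticFingerprint]) -> String? {
    func distance(_ a: [Double], _ b: [Double]) -> Double {
        zip(a, b).reduce(0) { $0 + ($1.0 - $1.1) * ($1.0 - $1.1) }
    }
    return database
        .min { distance(reading, $0.magnitudes) < distance(reading, $1.magnitudes) }?
        .cellId
}

let db = [
    MagneticFingerprint(cellId: "corridor-A", magnitudes: [48.2, 51.0, 47.5]),
    MagneticFingerprint(cellId: "atrium",     magnitudes: [39.9, 40.3, 41.1]),
]
let cell = bestMatch(reading: [48.0, 50.6, 47.9], database: db)
```

In practice the fusion layer would combine this match probabilistically with the other sensors rather than trusting a single nearest neighbour.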
Wi-Fi RTT
Time-of-flight Wi-Fi ranging to access points. Requires device and infrastructure RTT support.
Technical specifications:
- Protocol: IEEE 802.11mc (Wi-Fi RTT)
- Bands: 2.4 and 5 GHz
- Latency: < 100 ms
- At least 3 APs for trilateration
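With ranges to three RTT-capable APs, a 2D position can be sketched by linearized trilateration: subtract the first circle equation from the other two and solve the resulting linear system. The AP coordinates are illustrative; the production fusion weighs many APs and noise models.

```swift
import Foundation

// One access point: known position (metres) plus a measured RTT range.
struct AP { let x: Double; let y: Double; let d: Double }

// Linearized 2D trilateration from exactly three ranges (a sketch).
func trilaterate(_ a: AP, _ b: AP, _ c: AP) -> (x: Double, y: Double) {
    // Subtracting circle (a) from circles (b) and (c) yields two linear
    // equations A·x + B·y = C and D·x + E·y = F.
    let A = 2 * (b.x - a.x), B = 2 * (b.y - a.y)
    let C = a.d * a.d - b.d * b.d - a.x * a.x + b.x * b.x - a.y * a.y + b.y * b.y
    let D = 2 * (c.x - a.x), E = 2 * (c.y - a.y)
    let F = a.d * a.d - c.d * c.d - a.x * a.x + c.x * c.x - a.y * a.y + c.y * c.y
    let det = A * E - B * D
    return (x: (C * E - B * F) / det, y: (A * F - C * D) / det)
}

let pos = trilaterate(AP(x: 0,  y: 0,  d: 5),
                      AP(x: 10, y: 0,  d: 5),
                      AP(x: 0,  y: 10, d: (125.0).squareRoot()))
```

With more than three APs the same equations are solved by least squares, which is what makes the extra anchors improve accuracy.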
Bluetooth Low Energy (BLE)
Beacons for proximity, direction, POIs, and waypoints.
Technical specifications:
- Protocol: BLE 5.0+
- RSSI (received signal strength)
- Battery: 2–5 years
- Range: up to 100 m
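RSSI can be turned into a rough distance with the log-distance path-loss model. The 1 m reference power of −59 dBm and the environment exponent n = 2 below are assumed calibration values, not SDK constants.

```swift
import Foundation

// Log-distance path loss: d = 10^((txPower - rssi) / (10 * n)),
// where txPower is the RSSI expected at 1 m from the beacon.
func estimateDistance(rssi: Double,
                      txPower: Double = -59,
                      pathLossExponent n: Double = 2.0) -> Double {
    pow(10, (txPower - rssi) / (10 * n))
}

// 20 dB below the 1 m reference -> roughly 10 m in free space.
let d = estimateDistance(rssi: -79)
```

Because RSSI fluctuates heavily indoors, estimates like this are usually smoothed and treated as proximity hints rather than precise ranges.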
Inertial measurement unit (IMU)
The IMU tracks motion, rotation, and altitude, bridging gaps between radio and visual references.
Technical specifications:
- Accelerometer: ±16 g, 400 Hz
- Gyro: ±2000°/s, 400 Hz
- Barometer: 0.5 m altitude resolution for floor detection
- Fusion: Kalman
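Barometric floor detection can be sketched by converting a pressure delta into an altitude delta with the international barometric formula and dividing by a nominal floor height. The 3.5 m floor height is an assumption for illustration.

```swift
import Foundation

// Estimate how many floors the device has climbed since a reference
// pressure reading was taken (positive = up).
func floorChange(pressureHPa p: Double,
                 referenceHPa p0: Double,
                 floorHeight: Double = 3.5) -> Int {
    // International barometric formula for altitude difference (metres).
    let altitude = 44330.0 * (1.0 - pow(p / p0, 1.0 / 5.255))
    return Int((altitude / floorHeight).rounded())
}

// ~1.15 hPa drop corresponds to roughly 9.6 m of ascent.
let floors = floorChange(pressureHPa: 1012.1, referenceHPa: 1013.25)
```

Real systems also compensate for weather-driven pressure drift, typically by re-anchoring the reference whenever another sensor confirms the floor.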
Computer vision (CV)
Visually recognizes features, AR markers, QR codes, and text for position updates and SLAM.
Technical specifications:
- SLAM: simultaneous localization & mapping
- Feature detection: ORB, SIFT
- AR markers: ArUco, QR
- Runtime: TensorFlow Lite
GNSS / GPS (outdoor)
Satellite position outdoors and for indoor–outdoor handover and drift control.
Technical specifications:
- Multi-GNSS: GPS, GLONASS, Galileo, BeiDou
- A-GPS for quick time to first fix (TTFF)
- Automatic indoor / outdoor switching
- Fused with other layers
Algorithms
Processing and intelligence
Particle filter (Monte Carlo)
Many weighted particles track the most likely user pose from multiple noisy sensors.
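A single predict, weight, estimate cycle of that idea, reduced to one dimension, might look like the sketch below. The odometry delta, measurement, and noise level are illustrative values; the production filter tracks full 2D poses and resamples the cloud.

```swift
import Foundation

struct Particle { var x: Double; var w: Double }

// One particle-filter step: move each particle by the odometry delta,
// reweight it by the Gaussian likelihood of a noisy range measurement,
// then report the weighted mean as the pose estimate.
func step(particles: [Particle], odometry: Double,
          measurement: Double, sigma: Double) -> Double {
    var updated = particles.map { p -> Particle in
        let x = p.x + odometry                                  // predict
        let err = measurement - x
        let w = p.w * exp(-err * err / (2 * sigma * sigma))     // weight
        return Particle(x: x, w: w)
    }
    let total = updated.reduce(0) { $0 + $1.w }
    for i in updated.indices { updated[i].w /= total }          // normalize
    return updated.reduce(0) { $0 + $1.x * $1.w }               // estimate
}

let cloud = [Particle(x: 0.0, w: 1), Particle(x: 0.5, w: 1), Particle(x: 1.0, w: 1)]
let estimate = step(particles: cloud, odometry: 1.0, measurement: 1.6, sigma: 0.5)
```

The estimate lands between the odometry prediction and the measurement, pulled toward whichever particles explain the data best.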
Extended Kalman filter
Recursively estimates a dynamic system with noisy inputs; used for smooth paths and short-term prediction.
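The scalar (linear) special case shows the core of that update: blend a prediction with a noisy measurement in proportion to their variances. The EKF applies the same blend after linearizing the motion and measurement models; the numbers below are illustrative.

```swift
import Foundation

// Scalar Kalman update: the gain k decides how much of the innovation
// (z - x) to accept, based on prediction variance p vs measurement noise r.
func kalmanUpdate(estimate x: Double, variance p: Double,
                  measurement z: Double, noise r: Double) -> (x: Double, p: Double) {
    let k = p / (p + r)                       // Kalman gain in [0, 1]
    return (x + k * (z - x), (1 - k) * p)     // corrected state and variance
}

// Equal confidence in prediction and measurement -> split the difference.
let (x, p) = kalmanUpdate(estimate: 10.0, variance: 4.0, measurement: 12.0, noise: 4.0)
```

Note that the posterior variance shrinks after every update, which is what produces the smooth, confident trajectories described above.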
A* pathfinding
Finds a shortest path with obstacles, accessibility, and user preferences as weights.
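A compact grid version of A* with a Manhattan heuristic illustrates the search. The production router runs on the venue graph with accessibility weights; the grid, unit step costs, and obstacle encoding here are illustrative.

```swift
import Foundation

struct Point: Hashable { let x: Int; let y: Int }

// A* over a grid where 0 = walkable and 1 = obstacle.
// Returns the path cost in steps, or nil if the goal is unreachable.
func aStar(grid: [[Int]], start: Point, goal: Point) -> Int? {
    func h(_ p: Point) -> Int { abs(p.x - goal.x) + abs(p.y - goal.y) }  // heuristic
    var open: Set<Point> = [start]
    var g: [Point: Int] = [start: 0]                  // best known cost so far
    while let current = open.min(by: { g[$0]! + h($0) < g[$1]! + h($1) }) {
        if current == goal { return g[current] }
        open.remove(current)
        for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
            let n = Point(x: current.x + dx, y: current.y + dy)
            guard n.y >= 0, n.y < grid.count,
                  n.x >= 0, n.x < grid[n.y].count,
                  grid[n.y][n.x] == 0 else { continue }
            let tentative = g[current]! + 1
            if tentative < g[n, default: Int.max] {
                g[n] = tentative
                open.insert(n)
            }
        }
    }
    return nil
}

// A wall on the middle row forces a detour around the right side.
let grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
let cost = aStar(grid: grid, start: Point(x: 0, y: 0), goal: Point(x: 0, y: 2))
```

Accessibility weighting fits naturally here: stairs or narrow passages simply get a higher edge cost than `1` (or `Int.max` to exclude them).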
SLAM
Builds a map and tracks the device in it with visual features.
Machine learning (neural networks)
Trained models classify environments, denoise sensor data, and improve accuracy over time.
Dead reckoning
Propagates the pose from the last fix using velocity and heading when radio fixes drop.
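A minimal propagation step, assuming a constant 0.7 m step length and a heading convention of 0° = north (+y); both are illustrative assumptions, since real pedestrian dead reckoning estimates step length per user and per stride.

```swift
import Foundation

// Advance the last known fix by `steps` detected footsteps along the
// current compass heading.
func propagate(from pose: (x: Double, y: Double),
               headingDegrees: Double,
               steps: Int,
               stepLength: Double = 0.7) -> (x: Double, y: Double) {
    let theta = headingDegrees * .pi / 180
    let d = Double(steps) * stepLength
    // Heading 0° points to +y (north), 90° to +x (east).
    return (pose.x + d * sin(theta), pose.y + d * cos(theta))
}

// Ten steps due east from the origin.
let pose = propagate(from: (x: 0, y: 0), headingDegrees: 90, steps: 10)
```

Error grows with every propagated step, which is why the fusion layer snaps back to an absolute reference (Wi-Fi RTT, BLE, magnetic match) as soon as one is available.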
Processing flow
1. Data capture: sensors stream raw measurements in real time
2. Preprocessing: filtering, calibration, and normalization
3. Sensor fusion: a particle filter merges all sources
4. Kalman filter: smooths and predicts the trajectory
5. Map matching: projects the pose onto the routable map graph
6. Output pose: exposed to the app over the API
REST API
Endpoints and integration
Base URL: https://api.seeu.app/v1

Endpoints:

- /api/v1/positioning: returns the current device pose.
  Parameters: device_id, venue_id, floor
- /api/v1/navigation/route: plans a path between two points.
  Parameters: origin, destination, accessibility_mode
- /api/v1/venues/{id}/map: fetches venue map tiles.
  Parameters: venue_id, floor, format
- /api/v1/pois: lists nearby points of interest.
  Parameters: lat, lng, radius, category
- /api/v1/tracking/update: posts a sensor batch for server-side fusion.
  Parameters: sensor_data, timestamp, device_id

Sample request
curl -X POST https://api.seeu.app/v1/navigation/route \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "origin": {
      "lat": -23.550520,
      "lng": -46.633308,
      "floor": 1
    },
    "destination": {
      "lat": -23.551234,
      "lng": -46.634567,
      "floor": 2
    },
    "accessibility_mode": true
  }'

Implementation guide
How to integrate the stack
1. System requirements
iOS
- iOS 14.0+
- Swift 5.5+
- CoreLocation
- CoreMotion
- ARKit (optional)
Android
- Android 8.0+ (API 26)
- Kotlin 1.7+
- Google Play services
- Android sensor API
- ARCore (optional)
2. SDK install
iOS (CocoaPods)
pod 'SeeUMapping', '~> 1.0'
Android (Gradle)
implementation 'app.seeu:mapping:1.0.0'
3. Initialization
// Swift
import SeeUMapping

let mapping = SeeUMapping(apiKey: "YOUR_API_KEY")

mapping.configure(
    venueId: "shopping-center-xyz",
    enableMagneticPositioning: true,
    enableWiFiRTT: true,
    enableBLE: true
)

mapping.startPositioning { result in
    switch result {
    case .success(let position):
        print("Lat: \(position.lat), Lng: \(position.lng)")
    case .failure(let error):
        print("Error: \(error)")
    }
}

4. Navigation
mapping.calculateRoute(
    from: currentPosition,
    to: destination,
    accessibilityMode: true
) { route in
    mapping.startNavigation(route: route) { instruction in
        // Turn-by-turn instructions
        print(instruction.text)
        speakInstruction(instruction.audio)
    }
}