We publish our methodology because trust requires transparency. This page explains what the score is, what it is not, and where it can be wrong — without giving away the technical secrets that make our scoring different from the next app.
1. What the score is built from
For every score, we draw on data from authoritative scientific and meteorological agencies, grouped into broad categories (a sketch of how these inputs combine into one feature record follows the list):
Oceanographic data — satellite-derived sea-surface temperature, salinity, dissolved oxygen, mixed-layer depth, currents, and biological productivity from global oceanographic agencies.
Atmospheric and wave data — wind speed and direction, atmospheric pressure, wave height, and swell patterns from established meteorological models.
Bathymetry — sea-bed depth from global bathymetric surveys, used to infer underwater structure (reefs, drop-offs, seamounts).
Verified species records — recent fish and seabird observations from peer-reviewed scientific databases. Piscivorous seabirds in particular indicate prey concentrations.
Commercial activity indicators — nearby fishing-vessel activity as a proxy for active baitfish concentrations.
Astronomical factors — sun angle, moon phase, and tide cycle to estimate feeding windows.
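To make these categories concrete, here is a minimal sketch of how the inputs might be grouped into a single feature record per zone. Every field name, unit, and grouping below is an illustrative assumption for the sketch, not our actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ZoneFeatures:
    """Illustrative feature record for one scored zone.

    Field names and units are assumptions for this sketch,
    not the app's production schema.
    """
    # Oceanographic
    sst_c: float                  # sea-surface temperature, °C
    salinity_psu: float           # practical salinity units
    dissolved_o2_mg_l: float
    mixed_layer_depth_m: float
    chlorophyll_mg_m3: float      # proxy for biological productivity
    # Atmospheric and wave
    wind_speed_ms: float
    pressure_hpa: float
    wave_height_m: float
    # Structure (from bathymetry)
    depth_m: float
    depth_gradient: float         # steepness; hints at drop-offs
    # Biology and activity
    recent_species_records: int   # verified observations in a recent window
    seabird_records: int          # piscivorous seabird sightings
    vessel_density: float
    # Astronomy / time
    moon_phase: float             # 0 = new, 0.5 = full, in [0, 1)
    tide_phase: float             # fraction of the tide cycle elapsed
    sun_elevation_deg: float
    # Sparse regions carry reduced confidence (see section 3)
    data_confidence: Optional[float] = None
```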
2. How the score is computed
Two complementary approaches feed into every number you see:
A transparent heuristic model — a documented set of rules across water quality, weather, structure, biology, and time-of-day. You can see this breakdown in the app for any zone.
A machine-learning model trained exclusively on independently verified scientific observations from around the world. It learns which combinations of conditions correlate with confirmed fish presence and is continuously evaluated against held-out data and live feedback.
When the two approaches disagree, the app shows a calmer "AI is calibrating" message rather than a single confident number: the signal is noisy, and local knowledge wins.
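As a rough illustration of how a blend with a disagreement check can work, here is a minimal sketch. The equal blend weight and the disagreement threshold are illustrative assumptions; the production pipeline weights the two signals differently and handles more cases.

```python
def combine_scores(heuristic: float, ml: float,
                   disagreement_threshold: float = 0.25) -> dict:
    """Blend a rule-based score and an ML score (both in [0, 1]).

    The 50/50 weighting and the 0.25 threshold are illustrative
    assumptions for this sketch, not production values.
    """
    if abs(heuristic - ml) > disagreement_threshold:
        # The models disagree: surface uncertainty, not a number.
        return {"status": "calibrating", "score": None}
    blended = 0.5 * heuristic + 0.5 * ml
    return {"status": "ok", "score": round(blended, 2)}
```

With these assumed values, combine_scores(0.9, 0.4) reports "calibrating", while combine_scores(0.7, 0.6) agrees closely enough to publish a blended 0.65.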
3. What the score does NOT mean
It is not a guarantee you will catch fish. Good conditions are necessary but not sufficient — fish behaviour, luck, and skill all matter.
It does not account for legal status. Always check local regulations before fishing.
It does not incorporate user-submitted catches. We only train on independently verified scientific data to avoid feedback loops where popular spots reinforce their own scores.
In remote areas far from data-collection infrastructure, the score reflects general patterns only and confidence is reduced accordingly.
4. Live accuracy tracking
Inside the app, Settings → Model accuracy shows our live-tracked success rate for HOT predictions, measured against catches that users opt to share with location enabled. We publish a regional accuracy number only after that region crosses a statistical-significance threshold, so the figure is meaningful rather than noise.
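For readers curious what a significance gate like this can look like, here is a minimal sketch using the Wilson score interval, which keeps small samples with lucky streaks from publishing inflated numbers. The sample-size floor, the z value, and the interval-width cutoff are all illustrative assumptions, not our published thresholds.

```python
import math

def publishable_accuracy(successes: int, trials: int,
                         min_trials: int = 200, z: float = 1.96):
    """Return a regional accuracy figure only when it is meaningful.

    min_trials, z, and the 0.10 width cutoff are assumptions for
    this sketch. Returns None while the region is still noise.
    """
    if trials < min_trials:
        return None  # not enough shared catches in this region yet
    p = successes / trials
    # Wilson score interval for a binomial proportion
    denom = 1 + z * z / trials
    centre = p + z * z / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials ** 2))
    lower = (centre - margin) / denom
    upper = (centre + margin) / denom
    if upper - lower > 0.10:
        return None  # interval still too wide to be meaningful
    return round(p, 3)
```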
5. Known limitations
Data gaps by region — coverage strength varies. Saltwater coverage is generally stronger than freshwater; some lakes rely on climatology rather than real-time sensors.
Inherent satellite delays — some oceanographic measurements have processing latency. Where we use them, scores prefer the freshest available signal.
Fishing regulations — the regulations panel is informational only and may be incomplete for some jurisdictions. Always verify with the local authority.
6. Why we don't train on user catches
A fishing app trained on popularity votes learns to reward viral spots, not good fishing. Popular spots attract more anglers, who log more catches, which boosts the score, which attracts more anglers — regardless of actual conditions. That's a feedback loop, not a model.
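A toy simulation makes the loop visible. Every number below is made up; the point is only the shape of the curve, not any real model behaviour.

```python
def simulate_popularity_loop(steps: int = 10) -> list[float]:
    """Toy model of the feedback loop described above.

    The underlying fishing quality never changes, yet a score
    trained on logged catches climbs anyway: a higher score draws
    more anglers, who log more catches, which raises the score.
    """
    true_quality = 0.3   # actual conditions, held constant
    score = true_quality
    history = []
    for _ in range(steps):
        anglers = 10 * score              # visits scale with the score
        catches = anglers * true_quality  # catches scale with quality
        # Naive popularity update: more logged catches => higher score
        score = min(1.0, score + 0.02 * catches)
        history.append(round(score, 3))
    return history

# The score drifts toward 1.0 while true_quality stays at 0.3.
```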
We chose the harder path: grounding predictions in peer-reviewed science rather than crowd-sourced enthusiasm. That means slower scaling and more conservative validation. It also means the score reflects conditions, not hype.
User-logged catches remain visible in the app as a reference layer (the BiteMap) — they inform your decisions, but they don't train the model.