JavaScript Logic

Here’s the clean mental model.

This file is not really “UI code.” It is a streaming data ingestion pipeline with UI rendering bolted onto it. The core pattern is:

raw SSE event → parse CSV into a data object → selectively push values into DOM spans → separately push some values into plot arrays → separately update echos/toggles/status widgets.

That high-level description is actually stated right in the file header: CSVData1/2/3 are parsed into a central data object, then updateFields() uses whitelist arrays like otherFields to decide what gets rendered.

  1. The actual incoming streams

There are four streams that matter for tracing:

CSVData CSVData2 CSVData3 TimestampData

There is also a console SSE event, but that one only appends lines to the on-screen console and is not part of the telemetry rendering pipeline. The listeners are attached after initializeEventSource() succeeds. The file also stores handleCSVData in window._csvDataHandler so demo mode can fake-feed the same path.
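The wiring described above can be sketched as follows. The stream names are from the file, but attachTelemetryListeners is a hypothetical wrapper added here so the wiring can be shown (and exercised) in one place; globalThis stands in for window so the sketch also runs outside a browser.

```javascript
// Sketch of the SSE listener wiring, assuming a standard EventSource-like object.
// attachTelemetryListeners is a hypothetical helper; the real file attaches these
// inline after initializeEventSource() succeeds.
function attachTelemetryListeners(es, handlers) {
  es.addEventListener("CSVData", handlers.handleCSVData);
  es.addEventListener("CSVData2", handlers.handleCSVData2);
  es.addEventListener("CSVData3", handlers.handleCSVData3);
  es.addEventListener("TimestampData", handlers.handleTimestampData);
  // Exposed globally so demo mode can fake-feed the same code path;
  // the real file stores this on window._csvDataHandler.
  globalThis._csvDataHandler = handlers.handleCSVData;
}
```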

  2. The one-sentence summary of each payload

CSVData is the fast live stream. It carries the real-time values that drive the main dashboard and the plots: amps, volts, RPM, duty, PID values, field status, mode, and some IMU live values. The handler expects exactly 50 values.

CSVData2 is the large “state / settings / counters / max values / weather / lifetime / mode flags / IMU stats” payload. It expects 219 values. It is, roughly, everything that does not need the fastest rate.

CSVData3 is mostly the learning-table / thermal-PID / advanced-learning / memory-status payload. It expects 218 values and has its own local updateFields() and whitelist.

TimestampData does not carry sensor values. It carries ages in milliseconds for each sensor reading. It feeds window.sensorAges, which is used only for staleness styling; it never updates actual readings.

  3. What happens when CSVData arrives

This is the main path to understand first.

When CSVData arrives, the handler does this:

First, it updates lastEventTime and flips the inline connection indicator to connected. Next, it parses the comma-separated string into a numeric values[] array, checks the length, and bails out if it is not exactly 50. Finally, it maps array indices into a data object with named keys like MeasuredAmps, BatteryV, RPM, dutyCycle, setpointLimited, pidInput, pidOutput, the IMU fields, and plot-control fields like webgaugesinterval, plotTimeWindow, and the Y-axis limits.

That data object is the important breakpoint. After this point, almost nothing should care about raw CSV position anymore.
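The parse-and-map step above can be sketched as below. RPM at values[4] is the position stated later in this document; the other index positions are illustrative assumptions, not taken from the real file.

```javascript
// Minimal sketch of the CSVData parse step. Only the RPM index (values[4]) is
// confirmed by the file; the other positions here are illustrative.
const EXPECTED_CSVDATA_LENGTH = 50;

function parseCSVData(raw) {
  const values = raw.split(",").map(Number);
  // Bail out on malformed packets rather than rendering garbage.
  if (values.length !== EXPECTED_CSVDATA_LENGTH) return null;
  return {
    MeasuredAmps: values[0], // illustrative position
    BatteryV: values[1],     // illustrative position
    RPM: values[4],          // position stated in the RPM example
    dutyCycle: values[5],    // illustrative position
    // ...the remaining named keys map the other indices the same way
  };
}
```

Past this function, consumers refer to data.RPM and friends, never to raw CSV positions.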

Then the handler branches into several downstream consumers:

It updates plot configuration occasionally using updatePlotConfiguration(data), based on a counter and webgaugesinterval. That means plot axis / window changes are not rechecked every packet, only on a calculated cadence.

It updates special status widgets immediately, like field state text/class and charge-stage visibility. That logic is custom, not part of generic updateFields().

It defines a local updateFields(fieldArray) that knows how to scale values for this payload. This matters: there is not one universal formatter for the whole file. Each payload has its own local formatter rules. For CSVData, examples are /100 for volts/current/PID values, /10 for iiout, /1000000 for MaximumLoopTime, plus special label lookup for currentMode and currentPartitionType.

It splits fields into two buckets: criticalFields, which are updated every cycle, and otherFields, which are updated only every 4th cycle. That is one of the biggest reasons the file feels confusing: not everything in the same payload is rendered at the same rate.

Then it runs updateFields(criticalFields) and processCSVDataOptimized(data) every cycle, and on every 4th cycle it additionally runs updateFields(otherFields), updateAllEchosOptimized(data), and updateTogglesFromData(data).

That means the live path is really three parallel sinks:

DOM text updates for critical values, plot-array updates for graphs, and slower DOM/echo/toggle updates every 4th packet.
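The fast-lane/slow-lane split can be sketched with a packet counter. packetCount and onCSVDataPacket are hypothetical names; the real handler keys its slow path off a similar counter.

```javascript
// Sketch of the every-cycle vs every-4th-cycle split described above.
// packetCount and the sinks object are illustrative, not the file's own names.
let packetCount = 0;

function onCSVDataPacket(data, sinks) {
  packetCount++;
  sinks.updateFields(sinks.criticalFields, data); // fast lane: every packet
  sinks.processCSVDataOptimized(data);            // plot arrays: every packet
  if (packetCount % 4 === 0) {                    // slow lane: every 4th packet
    sinks.updateFields(sinks.otherFields, data);
    sinks.updateAllEchosOptimized(data);
    sinks.updateTogglesFromData(data);
  }
}
```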

  4. What processCSVDataOptimized(data) actually does

This function is plot plumbing, not general UI plumbing.

It takes the named data object and pushes selected values into plot data arrays: currentTempData, voltageData, rpmData, temperatureData, and pidTuningData if present. It shifts old values left, inserts the newest value at the end, then queues plot redraws with queuePlotUpdate().

So if you are asking “where does incoming CSV become graph data?”, this is the answer: CSVData → data object → processCSVDataOptimized(data) → plot arrays → queued redraw.

That is a separate lane from text rendering.
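The shift-left/append-right buffer update can be sketched as a tiny helper. pushPlotSample is a hypothetical name; the real code does this inline for rpmData, voltageData, and the other plot arrays before calling queuePlotUpdate().

```javascript
// Sketch of the fixed-length plot buffer update described above:
// drop the oldest sample, append the newest, keep the length constant.
function pushPlotSample(buffer, sample) {
  buffer.shift();      // drop the oldest point from the front
  buffer.push(sample); // append the newest point at the end
  return buffer;
}
```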

  5. What happens when CSVData2 arrives

CSVData2 does the same broad pattern, but for a different class of values.

It parses 219 numbers into a huge data object full of maxes, SOC, run times, energy counters, weather/GPS, toggles, alarm settings, cloud flags, life metrics, firmware info, device ID, buffer counts, wind/nav data, CPU loads, forced-update status, and many IMU statistical values.

Then it defines another local updateFields(fieldArray) with its own formatting rules. This formatter is much fatter because CSVData2 contains many kinds of units. It handles cases like: reset-reason lookup, minutes-to-D/H/M formatting, GPS /1000000, speed /100, energy /1000, times in ms to seconds or minutes, fuel conversions, runtime formatting to hh:mm:ss, and many others.
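Two of the conversion rules named above can be sketched like this. The exact rounding, padding, and labels in the real formatter may differ; these are illustrative versions.

```javascript
// Illustrative versions of two CSVData2 formatter rules:
// minutes-to-D/H/M and runtime-to-hh:mm:ss. Output details are assumptions.
function formatMinutesToDHM(totalMinutes) {
  const d = Math.floor(totalMinutes / 1440);
  const h = Math.floor((totalMinutes % 1440) / 60);
  const m = totalMinutes % 60;
  return `${d}D ${h}H ${m}M`;
}

function formatRuntimeHMS(totalSeconds) {
  const pad = (n) => String(n).padStart(2, "0");
  const h = Math.floor(totalSeconds / 3600);
  const m = Math.floor((totalSeconds % 3600) / 60);
  const s = totalSeconds % 60;
  return `${pad(h)}:${pad(m)}:${pad(s)}`;
}
```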

Then it builds a giant otherFields whitelist and runs updateFields(otherFields). After that it also runs updateAllEchosOptimized(data) and updateTogglesFromData(data).

So for CSVData2, the data path is mostly: CSVData2 → data object → payload-specific formatter → whitelist-driven DOM updates + echos/toggles.

It is less about fast live graphing and more about state/config/lifetime/status.

  6. What happens when CSVData3 arrives

CSVData3 is another independent lane.

It parses 218 values into a data object for learning-table state, overheat/safe-hours data, learning counters, PID tuning settings, thermal PID params, and memory stats.

Then it defines yet another local updateFields() with formatting rules specific to this payload. It converts things like: ms to seconds for some timers, /100 for PID/setpoint/resistance-style fields, safe-hours special conversion, and default integer rendering.
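A payload-local formatter in this style is essentially a per-field rule table with an integer default, which can be sketched as below. The field names and scalings here are illustrative, not the file's actual field list.

```javascript
// Sketch of a CSVData3-style local formatter: per-field rules, integer default.
// Field names (LearnTimerMs, ThermalKp) are hypothetical examples.
const csvData3Rules = {
  LearnTimerMs: (v) => (v / 1000).toFixed(1), // ms → seconds
  ThermalKp: (v) => (v / 100).toFixed(2),     // /100 scaling for PID-style fields
};

function formatCSVData3Field(name, value) {
  const rule = csvData3Rules[name];
  return rule ? rule(value) : String(Math.round(value)); // default: integer rendering
}
```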

Then it uses another whitelist otherFields and runs updateFields(otherFields). It also updates PID tuning plot configuration, learning-table inputs, fuel-table inputs, HP sync, learning highlights, glyphs, PID initialized display, and finally echos and toggles.

So CSVData3 is basically: advanced state payload → advanced displays/tables/PID config.

  7. TimestampData is a side channel, not a value payload

This one is easy to misunderstand.

TimestampData does not update readings like volts or amps. It only updates window.sensorAges with how old each reading is in milliseconds. The keys include heading, latitude, longitude, satellite count, Victron values, alternator temp, thermistor temp, rpm, measured amps, battery voltage, IBV, battery current, channel3V, duty, field volts, and field amps.

Later, updateAllStalenessStyles() reads window.sensorAges and applies stale styling to specific DOM elements. That is why stale graying is decoupled from value rendering.
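The decoupling can be sketched as a pure age check plus a class toggle. isStale and applyStalenessClass are hypothetical names, and the threshold is illustrative; as noted below, the real thresholds are deliberately loose.

```javascript
// Sketch of the staleness check behind updateAllStalenessStyles():
// a pure age comparison, then styling only — no value rendering here.
function isStale(ageMs, thresholdMs) {
  return ageMs > thresholdMs;
}

function applyStalenessClass(el, ageMs, thresholdMs) {
  // classList.toggle with a force flag adds or removes the class in one call.
  el.classList.toggle("stale", isStale(ageMs, thresholdMs));
}
```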

The top-of-file comment also matters here: the timestamp payload is sent every 3 seconds, and the temp sensors are read only every 5 seconds, which is why the stale thresholds are set looser than you might expect.

  8. Frequency map

This is the part that matters most in practice.

CSVData: This is the fastest important payload. The file explicitly says the IMU live fields in it are on a 150 ms update rate. Also, webgaugesinterval is carried inside this payload and used for plot timing/config logic, so the actual base stream cadence is device-controlled, not hardcoded in the JS.

Within CSVData, not all UI updates happen equally often. Critical fields update every cycle. Plot arrays update every cycle. otherFields, echos, and toggles update only every 4th cycle.

CSVData2: The JS does not clearly hardcode its interval; it looks server-paced. In this file, all the listed otherFields for CSVData2 are updated every time the payload arrives.

CSVData3: Same story — no obvious hardcoded interval in the JS; it is server-paced. Its whitelist update runs every time the payload arrives, and the comment explicitly says “no throttling since server controls timing.”

TimestampData: The top-of-file comment says this payload is sent every 3 seconds.

Staleness watchdog: A timer checks every 2 seconds whether lastEventTime is older than 9 seconds, and if so it marks the UI disconnected/stale.

Staleness-style refresh: updateAllStalenessStyles() runs every 2 seconds.
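The watchdog can be sketched as a pure decision polled on a timer. isDisconnected and markUiDisconnected are hypothetical names; the 9-second cutoff and 2-second poll match the behavior described above.

```javascript
// Sketch of the connection watchdog: pure check, polled every 2 seconds.
function isDisconnected(lastEventTime, now, maxSilenceMs = 9000) {
  return now - lastEventTime > maxSilenceMs;
}

// In the page, something like (markUiDisconnected is hypothetical):
// setInterval(() => {
//   if (isDisconnected(lastEventTime, Date.now())) markUiDisconnected();
// }, 2000);
```

Keeping the check pure makes the 9-second rule trivially testable without timers.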

So the practical result is: fast live values come through CSVData, visual stale-graying comes through TimestampData, and most heavyweight state/settings come through CSVData2/3.

  9. Why this feels so hard to follow

Because the same pattern is repeated three times, but not centralized.

You do not have one global updateFields().

You have one local formatter for CSVData, one local formatter for CSVData2, and one local formatter for CSVData3. Each has its own scaling rules. That means when a variable looks wrong on screen, you first have to know which payload owns it before you can even find the right formatter.

Also, the file has at least four different update mechanisms: plain DOM text updates via updateFields(...), plot-array updates in processCSVDataOptimized(...), echo updates in updateAllEchosOptimized(data), and hand-written toggle/status/widget-specific logic.

That is why tracing “one variable” is annoying. A value can be rendered by one of several mechanisms.

  10. The trace recipe for any one variable

Use this exact sequence.

First, ask: which payload owns it? If it is something live like RPM, amps, volts, duty, PID input/output, it is probably CSVData. If it is a counter, max, setting echo, GPS/weather, lifetime value, toggle, or device meta, it is probably CSVData2. If it is learning-table, safe-hours, overheat stats, or thermal PID config, it is probably CSVData3.

Second, find that variable in the const data = { ... } map for that payload.

Third, see whether it appears in: that payload’s criticalFields or otherFields, processCSVDataOptimized(data), updateAllEchosOptimized(data), or some custom one-off logic.

Fourth, check the payload-specific formatter rules for unit conversion.

That is the shortest honest way to debug this file.

  11. The most important concrete example: RPM

Here is RPM end to end.

CSVData maps RPM: values[4].

Then criticalFields maps "RPMID" and "RPMID2" to "RPM" so those DOM elements update every cycle.

Then otherFields maps "header-rpm" to "RPM" so the header RPM updates only every 4th cycle, not every cycle.

Then processCSVDataOptimized(data) also pushes RPM into rpmData for the plot every cycle.

Then stale styling later uses window.sensorAges.rpm to gray out both the main RPM and header RPM if the timestamp stream says the reading is old.

That one variable alone already touches: payload map, critical text rendering, slower header rendering, plot array, and staleness styling.

That is why the file feels like a monolith.

  12. My blunt assessment

The architecture is not random, but it is fragmented.

The good part is that almost everything does follow a repeatable pattern: payload → data object → whitelist/render/update helpers.

The bad part is that the same pattern is re-implemented three times with separate local formatters, plus a fourth side-channel for staleness, plus custom widgets sprinkled around. So the file has structure, but it is not centralized structure.

If you want to stop being lost, stop reading it top-to-bottom and start reading it as four pipelines.

Pipeline A: CSVData → live dashboard + plots.
Pipeline B: CSVData2 → settings/state/max/lifetime/weather.
Pipeline C: CSVData3 → learning/PID/advanced tables.
Pipeline D: TimestampData → stale ages only.