Conversation

@nhuet nhuet commented Jan 19, 2026

Context
For evolutions based on positive metrics to minimize (like a cost), we want to be able to assign a negative combined_score (=-cost) to programs and assign -inf to programs not running properly so that they are always ranked worse than others (especially if we do not now a bound on the cost). Note that if we do not assign any metrics to failing programs, the database will later return a fitness of 0 when requested (e.g. when ranking top programs) and thus see the failing program as an improved program (which is obviously not what we would like).
This works well during program evolution, but when using the (very nice) visualizer, loading the data fails.
More precisely, the checkpoint is loaded correctly by the Python code and then sent to the JavaScript side via a Response object decoded with `resp.json()` in `fetchAndRender()` from "main.js". This call crashes if the payload does not fully respect the JSON spec (NaN and Infinity are valid JavaScript values but are not valid JSON).
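The mismatch can be reproduced on the Python side alone: Python's `json` module serializes non-finite floats as the tokens `Infinity`/`-Infinity`/`NaN` by default, which are not part of the JSON grammar, so a strict parser such as the browser's `JSON.parse` (which `resp.json()` uses under the hood) rejects them:

```python
import json

# Python happily emits non-standard tokens for non-finite floats...
payload = json.dumps({"combined_score": float("-inf")})
print(payload)  # {"combined_score": -Infinity}  <- not valid JSON

# ...and only complains when asked to be strict, which is effectively
# what the browser-side JSON.parse is.
try:
    json.dumps({"combined_score": float("-inf")}, allow_nan=False)
except ValueError as e:
    print("rejected:", e)
```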

Solution proposed
In this PR, we replace -inf, +inf, and NaN values in program metrics by None before visualizing. Thanks to that:

  • The data import no longer crashes
  • The failing programs are clustered in the NaN box in the "performance" tab, which seems to be the proper way to visualize them.
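A minimal sketch of the sanitization step (the helper name is hypothetical; the idea is to apply it to the checkpoint data before serving it): recursively map non-finite floats to None, which serializes to JSON null.

```python
import json
import math

def sanitize(obj):
    """Recursively replace NaN/inf/-inf floats with None so the result
    is strictly JSON-serializable (becomes null on the JavaScript side)."""
    if isinstance(obj, float) and not math.isfinite(obj):
        return None
    if isinstance(obj, dict):
        return {k: sanitize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [sanitize(v) for v in obj]
    return obj

metrics = {"combined_score": float("-inf"), "runtime": float("nan"), "size": 42}
# allow_nan=False proves the sanitized payload is strict JSON.
print(json.dumps(sanitize(metrics), allow_nan=False))
# {"combined_score": null, "runtime": null, "size": 42}
```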


CLAassistant commented Jan 19, 2026

CLA assistant check
All committers have signed the CLA.

When the visualizer imports data from a checkpoint, it is sent to the
JavaScript via a response object decoded with `resp.json()` in
`fetchAndRender()` from "main.js".
This crashes if the payload does not fully respect the JSON spec
(NaN and Infinity are valid JavaScript values but not valid JSON).

This is useful for evolutions based on positive metrics to minimize (like a cost).
In that case, we want to put -metric in combined_score (which will then
be negative).
Thus an evolved program that does not work should be given a worse score during
evaluation. An easy way to do that is to put -inf (instead of not outputting any
metric, which would be replaced by 0 by default by the database when
requesting a fitness).
Doing so works well during evolution (ranking the top programs as
expected), but during visualization it was raising an error when fetching data.
@nhuet nhuet force-pushed the sanitize_inf_for_visu branch from 026d48f to 6d08400 Compare January 19, 2026 09:42