gtopt Python Scripts

The scripts/ directory contains Python command-line utilities for preparing, converting, visualizing, and post-processing data for use with gtopt.

Detailed documentation for each script is available in the docs/scripts/ directory. This page provides a summary with links to the full documentation.

Table of Contents

  • Installation
  • Data Preparation & Conversion
    • gtopt_diagram · full docs
    • plp2gtopt · full docs
    • pp2gtopt · full docs
    • gtopt2pp — convert gtopt JSON back to pandapower
    • igtopt · full docs · Excel template · igtopt --make-template regenerates the template
    • cvs2parquet · full docs
    • ts2gtopt · full docs
    • gtopt_compare · full docs
  • Running & Monitoring
    • run_gtopt — smart solver wrapper with pre/post-flight checks
    • sddp_monitor — live SDDP convergence dashboard
  • Validation & Diagnostics
    • gtopt_check_json — validate JSON planning files
    • gtopt_check_lp — diagnose infeasible LP files
    • gtopt_check_output — analyze solver output
    • gtopt_check_solvers — discover and validate LP solver plugins
    • gtopt_compress_lp — compress LP debug files
  • Utilities
    • gtopt_config — unified configuration for all tools
    • gtopt_field_extractor
  • Tool Comparison (gtopt vs PLP vs pandapower)
  • Using with gtopt_guisrv and gtopt_websrv

Installation

Install all tools with a single pip command from the repository root:

pip install ./scripts

This registers all 16 command-line tools on your PATH: gtopt_diagram, plp2gtopt, pp2gtopt, gtopt2pp, igtopt, cvs2parquet, ts2gtopt, gtopt_compare, run_gtopt, sddp_monitor, gtopt_check_json, gtopt_check_lp, gtopt_check_output, gtopt_check_solvers, gtopt_compress_lp, and gtopt_field_extractor. An editable install is useful during development:

pip install -e "./scripts[dev]"

Optional extras unlock additional output formats:

pip install -e "./scripts[dev,diagram]"
# adds: graphviz, pyvis, cairosvg

Package organization

Each command-line tool lives in its own Python package directory under scripts/:

| Package directory | Command | Description |
|---|---|---|
| gtopt_compare/ | gtopt_compare | pandapower ↔ gtopt comparison |
| cvs2parquet/ | cvs2parquet | CSV → Parquet converter |
| gtopt_diagram/ | gtopt_diagram | Network topology / planning diagrams |
| gtopt_field_extractor/ | gtopt_field_extractor | C++ header field metadata extractor |
| igtopt/ | igtopt | Excel → gtopt JSON converter |
| plp2gtopt/ | plp2gtopt | PLP → gtopt JSON converter |
| pp2gtopt/ | pp2gtopt | pandapower → gtopt JSON converter |
| gtopt2pp/ | gtopt2pp | gtopt JSON → pandapower converter |
| run_gtopt/ | run_gtopt | Smart solver wrapper with pre/post-flight checks |
| gtopt_check_json/ | gtopt_check_json | JSON planning file validator |
| gtopt_check_lp/ | gtopt_check_lp | Infeasible LP file diagnostic tool |
| gtopt_check_output/ | gtopt_check_output | Solver output analyzer |
| gtopt_check_solvers/ | gtopt_check_solvers | LP solver plugin discovery & validation |
| gtopt_compress_lp/ | gtopt_compress_lp | LP debug file compressor |
| gtopt_config/ | *(library)* | Unified configuration management |
| sddp_monitor/ | sddp_monitor | SDDP solver live monitoring dashboard |
| ts2gtopt/ | ts2gtopt | Time-series → gtopt block schedule converter |

Dependencies

| Package | Purpose |
|---|---|
| numpy | Numerical array processing |
| pandas | DataFrame I/O |
| pyarrow | Parquet read/write |
| openpyxl | Excel file support (igtopt) |
| pandapower | Power system network data (pp2gtopt, gtopt2pp, gtopt_compare) |
| rich | Styled terminal output (gtopt_check_json, plp2gtopt, igtopt, run_gtopt) |
| matplotlib *(optional)* | Live charts (sddp_monitor GUI mode) |
| graphviz *(optional)* | SVG/PNG/PDF rendering (gtopt_diagram) |
| pyvis *(optional)* | Interactive HTML diagrams (gtopt_diagram) |
| cairosvg *(optional)* | High-res PNG/PDF export (gtopt_diagram) |

gtopt_diagram

→ Full documentation

Generates network topology and planning-structure diagrams from a gtopt JSON planning file. Supports multiple output formats (SVG, PNG, PDF, DOT, Mermaid, interactive HTML) and automatic simplification of large cases via aggregation modes.

Basic usage

# Auto mode (default) – picks the best aggregation for your case size
gtopt_diagram cases/ieee_9b/ieee_9b.json -o ieee9b.svg

# Interactive HTML with physics simulation
gtopt_diagram cases/ieee_9b/ieee_9b.json --format html -o ieee9b.html

# Mermaid Markdown snippet (no extra dependencies)
gtopt_diagram cases/ieee_9b/ieee_9b.json --format mermaid

# Network-only: no generator nodes (clean topology view)
gtopt_diagram large_case.json --no-generators -o topo.svg

# Planning time-structure diagram
gtopt_diagram cases/c0/system_c0.json --diagram-type planning --format html

Output formats (<tt>--format</tt>)

| Format | Description | Requires |
|---|---|---|
| svg | Scalable vector graphic (default) | graphviz |
| png | Raster image | graphviz + cairosvg |
| pdf | PDF document | graphviz + cairosvg |
| dot | Graphviz DOT source | — (no extra deps) |
| mermaid | Mermaid flowchart source | — (no extra deps) |
| html | Interactive vis.js browser diagram | pyvis |

Aggregation modes (<tt>--aggregate</tt>)

| Mode | When auto selects it | Description |
|---|---|---|
| auto | (default) | Chooses based on element count |
| none | < 100 elements | Every generator shown individually |
| bus | 100–999 elements | One summary node per bus |
| type | ≥ 1000 elements | One node per (bus, generator-type) pair |
| global | — (manual only) | One node per generator type, system-wide |
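The auto-selection thresholds above are simple enough to mirror in a few lines. The following is an illustrative sketch of the documented rule, not the actual gtopt_diagram code:

```python
def pick_aggregation(n_elements: int) -> str:
    """Mirror the documented auto thresholds (illustrative only,
    not the actual gtopt_diagram implementation)."""
    if n_elements < 100:
        return "none"   # every generator shown individually
    if n_elements < 1000:
        return "bus"    # one summary node per bus
    return "type"       # one node per (bus, generator-type) pair
```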

Reducing large diagrams

# Keep only buses ≥ 220 kV (lump low-voltage buses into HV neighbours)
gtopt_diagram large_case.json --voltage-threshold 220 -o hv_topo.svg

# Show only hydro generators within 3 hops of a specific bus
gtopt_diagram large_case.json --filter-type hydro --focus-bus Chapo220 --focus-hops 3

# Hard node-count cap: escalate aggregation until ≤ 50 nodes remain
gtopt_diagram large_case.json --max-nodes 50 -o compact.svg

# Keep only the top-2 generators per bus by pmax
gtopt_diagram large_case.json --top-gens 2 -o top2.svg

Visual features

  • Voltage-based line coloring: transmission lines are drawn with color intensity and width proportional to the bus voltage level (higher voltage lines appear darker and wider).
  • Reservoir sizing: reservoir nodes are scaled by their storage capacity (emax), so larger reservoirs appear visually larger in the diagram.
  • Reserve zones and provisions: reserve_zone_array entries are rendered as zone nodes connected to the generators that provide reserves via reserve_provision_array.
  • Generator and demand profiles: generator_profile_array and demand_profile_array entries are rendered as profile nodes linked to their parent generator or demand.
  • Colorblind palette: use --palette colorblind to switch all element colors to a palette designed for color-vision deficiency accessibility.

File-referenced value resolution

When fields like pmax or lmax are stored as Parquet file references, gtopt_diagram resolves them using --scenario, --stage, and --block (all default to UID 1). This controls which row of the Parquet file is used for sizing and labeling diagram elements.
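Conceptually, the resolution picks the single Parquet row whose index columns match the requested UIDs. A minimal sketch, assuming `rows` is the file loaded as a list of dicts (e.g. via `pyarrow.parquet.read_table(...).to_pylist()`); the real logic is internal to gtopt_diagram:

```python
def resolve_file_value(rows, column, scenario=1, stage=1, block=1):
    """Return the value of `column` from the row matching the given
    scenario/stage/block UIDs (illustrative sketch)."""
    for r in rows:
        if (r["scenario"], r["stage"], r["block"]) == (scenario, stage, block):
            return r[column]
    raise KeyError(f"no row for scenario={scenario}, stage={stage}, block={block}")
```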

All options

positional arguments:
  json_file             gtopt JSON planning file

options:
  -t, --diagram-type    topology (default) or planning
  -f, --format          dot | png | svg | pdf | mermaid | html  (default: svg)
  -o, --output          output file path
  -s, --subsystem       full | electrical | hydro  (default: full)
  -L, --layout          dot | neato | fdp | sfdp | circo | twopi
  -d, --direction       LR | TD | BT | RL  (Mermaid direction, default: LR)
  --clusters            Group in Graphviz sub-clusters
  --palette             default | colorblind  (default: default)
  --scenario UID        Scenario UID for resolving file-referenced values (default: 1)
  --stage UID           Stage UID for resolving file-referenced values (default: 1)
  --block UID           Block UID for resolving file-referenced values (default: 1)

reduction options:
  -a, --aggregate       auto | none | bus | type | global  (default: auto)
  --no-generators       Omit all generator nodes
  -g, --top-gens N      Keep only top-N generators per bus by pmax
  --filter-type TYPE    Show only: hydro solar wind thermal battery
  --focus-bus BUS       Show only elements within N hops of BUS (repeatable)
  --focus-generator GEN Focus on bus(es) connected to these generators (by name or uid)
  --focus-area KV       Focus on buses at or above this voltage level (kV)
  --focus-hops N        Hops for --focus-bus (default: 2)
  --max-nodes N         Hard cap; escalate aggregation until ≤ N nodes
  -V, --voltage-threshold KV  Lump buses below KV into HV neighbours
  --hide-isolated       Remove unconnected nodes
  --compact             Omit detail labels (names/counts only)

plp2gtopt

→ Full documentation

Converts a PLP (Programación de Largo Plazo) case directory to the gtopt JSON + Parquet format. Reads the standard PLP data files (plpblo.dat, plpbar.dat, plpcosce.dat, plpcnfce.dat, plpcnfli.dat, plpdem.dat, plpeta.dat, and others) and writes:

  • A gtopt JSON file (<output-dir>.json) with the complete system, simulation, and options configuration.
  • Parquet time-series files organised in subdirectories under <output-dir>/ (e.g. Demand/lmax.parquet, Generator/pmin.parquet, Afluent/afluent.parquet).

Hydro system conversion

When plpcnfce.dat contains reservoir (embalse) or series (serie) centrals, plp2gtopt converts the cascaded hydro system into gtopt arrays. Two additional optional PLP files extend the hydro model:

| PLP file | gtopt array | Description |
|---|---|---|
| plpcnfce.dat | junction_array, waterway_array, turbine_array, reservoir_array, flow_array | Main hydro topology (required when hydro centrals exist) |
| plpcenre.dat | reservoir_efficiency_array | Volume-dependent turbine efficiency (PLP rendimiento): piecewise-linear conversion rate as a function of reservoir storage |
| plpcenfi.dat | filtration_array | Waterway-to-reservoir seepage (PLP filtración): linear model flow = slope × volume + constant |

Both plpcenre.dat and plpcenfi.dat are optional: if a file is absent, the corresponding array is simply not written. When present and non-empty, it is parsed and its array is appended to the JSON output.

**plpcenre.dat format** (Archivo de Rendimiento de Embalses):

# Number of entries
N
# For each entry:
'CENTRAL_NAME'     ← turbine (central) name
'EMBALSE_NAME'     ← reservoir name
mean_efficiency    ← fallback efficiency [MW·s/m³]
num_segments       ← number of piecewise-linear segments
idx  volume  slope  constant  scale  ← one line per segment

**plpcenfi.dat format** (Archivo de Centrales Filtración):

# Number of entries
N
# For each entry:
'CENTRAL_NAME'     ← waterway source central name
'EMBALSE_NAME'     ← receiving reservoir name
slope  constant    ← seepage model [m³/s/dam³] and [m³/s]
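As a sketch, the plpcenfi.dat layout above can be parsed as follows. This is illustrative only (the central/reservoir names in the test data are hypothetical, and the real plp2gtopt parser may handle more edge cases):

```python
def parse_plpcenfi(lines):
    """Minimal parser for the plpcenfi.dat layout shown above:
    an entry count followed by (central, reservoir, slope/constant)
    triples. Comment lines starting with '#' are skipped."""
    rows = [ln.strip() for ln in lines
            if ln.strip() and not ln.strip().startswith("#")]
    count, entries, i = int(rows[0]), [], 1
    for _ in range(count):
        slope, constant = (float(x) for x in rows[i + 2].split())
        entries.append({
            "central": rows[i].strip("'"),        # waterway source central
            "reservoir": rows[i + 1].strip("'"),  # receiving reservoir
            "slope": slope,                       # [m³/s/dam³]
            "constant": constant,                 # [m³/s]
        })
        i += 3
    return entries
```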

Basic usage

# Default: reads ./input, writes ./output.json + ./output/
plp2gtopt

# Explicit input/output directories
plp2gtopt -i plp_case_dir -o gtopt_case_dir

# Limit conversion to the first 5 stages
plp2gtopt -i input/ -s 5

# Single hydrology (1-based, default)
plp2gtopt -i input/ -y 1

# Two hydrology scenarios (1-based) with 60/40 probability split
plp2gtopt -i input/ -y 1,2 -p 0.6,0.4

# Range selector: hydrologies 1, 2, and 5 through 10
plp2gtopt -i input/ -y 1,2,5-10

# Group PLP stages 1–4 into phase 1, then one stage per phase after
plp2gtopt -i input/ --stages-phase '1:4,5,6,7,8,9,10,...'

# Apply a 10% annual discount rate
plp2gtopt -i input/ -d 0.10

# Verbose debug output
plp2gtopt -i input/ -l DEBUG

Hydrology index format (<tt>-y</tt> / <tt>--hydrologies</tt>)

Hydrology indices follow the Fortran 1-based convention: index 1 refers to the first hydrology column in the PLP data files. The argument accepts comma-separated values and ranges:

| Syntax | Meaning |
|---|---|
| 1 | First hydrology only (default) |
| 1,2 | Hydrologies 1 and 2 |
| 1,2,5-10 | Hydrologies 1, 2, and 5 through 10 |
| 1,2,5-10,11 | Hydrologies 1, 2, 5 through 10, and 11 |

The internal 0-based index stored in the output JSON ("hydrology" field in scenario_array) is automatically computed as input_index - 1.
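The documented index rules can be sketched as a small parser (illustrative; the actual plp2gtopt argument handling may differ):

```python
def parse_hydrologies(spec: str) -> list[int]:
    """Expand '-y' syntax like '1,2,5-10' (1-based, ranges inclusive)
    into the 0-based indices stored in scenario_array."""
    indices: list[int] = []
    for token in spec.split(","):
        if "-" in token:
            lo, hi = (int(x) for x in token.split("-"))
            indices.extend(range(lo - 1, hi))  # 1-based inclusive → 0-based
        else:
            indices.append(int(token) - 1)
    return indices
```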

Phase layout (<tt>--stages-phase</tt>)

By default, phase assignment is controlled by --solver:

  • sddp (default): one phase per PLP stage
  • mono / monolithic: one phase covering all stages

The --stages-phase option overrides this with an explicit mapping. Tokens are comma-separated and use 1-based PLP stage indices:

| Token | Meaning |
|---|---|
| N | Single stage N as one phase |
| N:M | Stages N through M (inclusive) as one phase |
| ... | (trailing) auto-expand one stage per phase for remaining stages |

# Stages 1-4 as phase 1, stages 5-10 each as their own phase,
# then one stage per phase for any remaining stages
plp2gtopt -i input/ --stages-phase '1:4,5,6,7,8,9,10,...'

# Group all stages into two phases: 1-12 and 13-24
plp2gtopt -i input/ --stages-phase '1:12,13:24'
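The token rules above can be sketched as a small expander (illustrative only; the converter's actual parsing may differ):

```python
def parse_stages_phase(spec: str, total_stages: int) -> list[list[int]]:
    """Expand a --stages-phase spec into per-phase stage lists
    (1-based stage indices; sketch of the documented token rules)."""
    phases: list[list[int]] = []
    for token in spec.split(","):
        if token == "...":
            # trailing wildcard: one stage per phase for the remainder
            start = phases[-1][-1] + 1 if phases else 1
            phases.extend([s] for s in range(start, total_stages + 1))
        elif ":" in token:
            lo, hi = (int(x) for x in token.split(":"))
            phases.append(list(range(lo, hi + 1)))  # inclusive range
        else:
            phases.append([int(token)])
    return phases
```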

Pasada (run-of-river) hydro modeling (<tt>--pasada-hydro</tt>)

By default (--pasada-hydro, enabled), PLP pasada (run-of-river) centrals are converted into the full gtopt hydro topology: a junction, waterway, turbine, and flow element for each central. This preserves hydrological connectivity and allows the solver to model water balance constraints.

Use --no-pasada-hydro to revert to the legacy behavior, where pasada centrals are modeled as generators with time-series profiles containing normalized capacity factors derived from the PLP afluent data.

Block-to-hour map (<tt>indhor.csv</tt>)

When the PLP input directory contains indhor.csv, plp2gtopt reads it and writes a normalised BlockHourMap/block_hour_map.parquet file alongside the other Parquet outputs. A "block_hour_map" key is added to the simulation section of the output JSON so that post-processing tools (e.g. ts2gtopt) can reconstruct hourly time-series from block-granularity solver output.

The indhor.csv format has columns Año, Mes, Dia, Hora, and Bloque (all integers; Hora is 1-based, running 1-24).
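As a sketch, indhor.csv can be loaded into a lookup map like this (the real tool normalises it into Parquet; column names as documented above):

```python
import csv
import io

def read_indhor(text: str) -> dict:
    """Load indhor.csv content into a (Año, Mes, Dia, Hora) → Bloque map.
    Hora is 1-based (1-24). Illustrative sketch only."""
    reader = csv.DictReader(io.StringIO(text))
    return {
        (int(r["Año"]), int(r["Mes"]), int(r["Dia"]), int(r["Hora"])): int(r["Bloque"])
        for r in reader
    }
```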

ZIP output (<tt>-z</tt> / <tt>--zip</tt>)

The -z flag creates a single ZIP archive that bundles the JSON configuration file and all Parquet/CSV data files together, preserving the full output directory structure. This archive is directly compatible with gtopt_guisrv (upload via the GUI) and gtopt_websrv (submit via the REST API):

plp2gtopt -z -i plp_case_2y -o gtopt_case_2y
# Produces: gtopt_case_2y.zip

The ZIP layout is:

gtopt_case_2y.zip
├── gtopt_case_2y.json          ← main system/simulation/options config
└── gtopt_case_2y/              ← input_directory (data files)
    ├── Demand/
    │   └── lmax.parquet
    ├── Generator/
    │   ├── pmin.parquet
    │   └── pmax.parquet
    └── Afluent/
        └── afluent.parquet

Note: the input_directory field in the JSON options matches the name of the subdirectory inside the ZIP, so gtopt_guisrv and gtopt_websrv can locate the data files without any extra configuration.
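The layout above can be reproduced with the standard zipfile module. A sketch (not the actual plp2gtopt code) that bundles a JSON file plus its data directory while keeping the directory name as the in-archive input_directory:

```python
import zipfile
from pathlib import Path

def bundle_case(json_file: str, data_dir: str, zip_path: str) -> None:
    """Write the JSON at the archive root and the data directory
    underneath it, preserving the directory structure (sketch of
    the documented -z/--zip layout)."""
    data = Path(data_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(json_file, Path(json_file).name)
        for f in sorted(data.rglob("*")):
            if f.is_file():
                arcname = (Path(data.name) / f.relative_to(data)).as_posix()
                zf.write(f, arcname)
```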

Conversion statistics

After a successful conversion, plp2gtopt logs statistics (at INFO level) similar to the pre-solve statistics printed by the gtopt solver:

=== System statistics ===
  System name     : plp2gtopt
=== System elements  ===
  Buses           : 2
  Generators      : 3
  Generator profs : 1
  Demands         : 2
  Lines           : 1
  Batteries       : 0
  Converters      : 0
  Junctions       : 2
  Waterways       : 1
  Reservoirs      : 1
  Turbines        : 1
=== Simulation statistics ===
  Blocks          : 8760
  Stages          : 5
  Scenarios       : 2
=== Key options ===
  use_single_bus  : False
  scale_objective : 1000
  demand_fail_cost: 1000
  input_directory : gtopt_case_2y
  annual_discount : 0.1
=== Conversion time ===
  Elapsed         : 1.234s

Use -l DEBUG to also see which individual .dat files are being parsed.

All options

| Flag | Default | Description |
|---|---|---|
| -i, --input-dir DIR | input | PLP input directory |
| -o, --output-dir DIR | output | Output directory for Parquet/CSV files |
| -f, --output-file FILE | <output-dir>.json | JSON output file path |
| -z, --zip | off | Create a ZIP archive of the JSON + data files |
| -s, --last-stage N | all | Stop after stage N |
| -d, --discount-rate RATE | 0.0 | Annual discount rate (e.g. 0.10 for 10%) |
| -m, --management-factor F | 0.0 | Demand management factor |
| -t, --last-time T | all | Stop at time T |
| -c, --compression ALG | gzip | Parquet compression (gzip, snappy, brotli, none) |
| -y, --hydrologies H1[,H2,…] | 1 | Hydrology scenario indices (1-based Fortran convention; accepts ranges e.g. 1,2,5-10) |
| -p, --probability-factors P1[,P2,…] | equal | Probability weights per scenario |
| --stages-phase SPEC | (solver default) | Explicit phase layout; comma-separated stage indices/ranges with optional ... wildcard |
| --solver TYPE | sddp | Simulation structure: sddp (one phase/scene per stage/scenario) or mono/monolithic (single phase and scene) |
| --pasada-hydro / --no-pasada-hydro | enabled | Model pasada (run-of-river) centrals as full hydro topology (junction, waterway, turbine, flow) instead of generator profiles. Use --no-pasada-hydro for legacy behavior with generator profiles using normalized capacity factors |
| --stationary-tol TOL | (auto) | Secondary convergence tolerance for stationary-gap detection. When the relative change in the SDDP gap over the last --stationary-window iterations falls below this value, the solver declares convergence even if gap > convergence_tol. Default: convergence_tol / 10. Set to 0 to disable |
| --stationary-window N | 4 | Number of iterations to look back when checking gap stationarity. Only used when --stationary-tol is set |
| -l, --log-level LEVEL | INFO | Verbosity (DEBUG, INFO, WARNING, ERROR) |
| -V, --version | — | Print version and exit |

Isolated centrals

During hydro topology conversion, PLP centrals that have no bus assignment (bus <= 0), no waterway connections, and are not referenced by any other central are considered isolated and are silently skipped. After the conversion statistics, a "Skipped Centrals" section lists all isolated centrals by name so the user can verify that they are genuinely unused.
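The skip rule can be expressed as a small predicate. This is an illustrative sketch; the field names here are assumptions, not the converter's internal representation:

```python
def is_isolated(central: dict, referenced_names: set) -> bool:
    """True when a central has no bus assignment (bus <= 0), no waterway
    connections, and is not referenced by any other central — the
    documented criteria for silently skipping it."""
    return (
        central.get("bus", 0) <= 0
        and not central.get("waterways")
        and central["name"] not in referenced_names
    )
```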

Error messages

plp2gtopt raises descriptive errors for common problems:

| Situation | Error message |
|---|---|
| Input directory missing | Input directory does not exist: 'plp_case/' |
| Required .dat file missing | Required file not found: …/plpblo.dat |
| Invalid data format | Invalid data format: … |

pp2gtopt

→ Full documentation

Converts a pandapower network to gtopt JSON format. Accepts either a built-in IEEE test network (via -n) or any pandapower network file saved to disk (via -f). Writes a self-contained gtopt JSON file ready to be solved directly with the gtopt binary or submitted via gtopt_guisrv / gtopt_websrv.

Basic usage

# Convert the default IEEE 30-bus built-in network → ieee30b.json
pp2gtopt

# Convert a saved pandapower JSON file
pp2gtopt -f my_network.json -o my_case.json

# Convert a MATPOWER case file
pp2gtopt -f case39.m -o case39.json

# Convert a pandapower Excel workbook
pp2gtopt -f network.xlsx -o network.json

# Use a specific built-in test network
pp2gtopt -n case14 -o ieee14b.json

# List all available built-in test networks
pp2gtopt --list-networks

Input from file (<tt>-f / --file</tt>)

-f FILE loads any pandapower network saved to disk. The format is auto-detected from the file extension:

| Extension | Format | Produced by |
|---|---|---|
| .json | pandapower JSON | pandapower.to_json() |
| .xlsx / .xls | pandapower Excel | pandapower.to_excel() |
| .m | MATPOWER case file | MATPOWER / Octave |

The output JSON system name is derived from the file stem (e.g. case39.m → "case39").

Available built-in networks (<tt>-n / --network</tt>)

| Network name | pandapower function | Description |
|---|---|---|
| ieee30b *(default)* | case_ieee30 | IEEE 30-bus (Washington) |
| case4gs | case4gs | 4-bus Glover-Sarma |
| case5 | case5 | 5-bus example |
| case6ww | case6ww | 6-bus Wood-Wollenberg |
| case9 | case9 | IEEE 9-bus |
| case14 | case14 | IEEE 14-bus |
| case33bw | case33bw | 33-bus Baran-Wu |
| case57 | case57 | IEEE 57-bus |
| case118 | case118 | IEEE 118-bus |

All options

| Flag | Default | Description |
|---|---|---|
| -f, --file FILE | — | pandapower network file (.json, .xlsx/.xls, .m) |
| -n, --network NAME | ieee30b | Built-in test network (mutually exclusive with -f) |
| -o, --output FILE | <stem>.json | Output JSON file path |
| --list-networks | — | Print all available built-in network names and exit |
| -V, --version | — | Print version and exit |

igtopt

→ Full documentation · Excel template

Converts an Excel workbook to a gtopt JSON case. Reads all named sheets from the workbook and writes:

  • A gtopt JSON file with the complete system, simulation, and options configuration.
  • Parquet (or CSV) time-series files written to the input_directory for any sheet whose name contains @ (e.g. Demand@lmax → <input_dir>/Demand/lmax.parquet).

Basic usage

# Basic conversion – output JSON and input directory derived from workbook name
igtopt system.xlsx

# Write output to an explicit JSON file
igtopt system.xlsx -j output/system.json

# Pretty-printed JSON (4-space indented), skip null/NaN values
igtopt system.xlsx --pretty --skip-nulls

# CSV time-series files instead of Parquet
igtopt system.xlsx -f csv

# Bundle JSON + data files into a ZIP archive (ready for gtopt_guisrv/websrv)
igtopt system.xlsx --zip

# Convert multiple workbooks in one run (merges into a single JSON)
igtopt case_a.xlsx case_b.xlsx -d /data/input

# Validate the workbook without writing output (exit 0 = OK, 1 = errors)
igtopt system.xlsx --validate

# Proceed even if some sheets have errors (partial output)
igtopt system.xlsx --ignore-errors

# Show debug log messages
igtopt system.xlsx -l DEBUG

Template generation (<tt>-T</tt> / <tt>--make-template</tt>)

The --make-template flag reads the gtopt C++ JSON header files (include/gtopt/json/) and generates a ready-to-use Excel template. Re-running after adding a new JSON element to the C++ source automatically produces an up-to-date workbook.

# Regenerate docs/templates/gtopt_template.xlsx (default output)
igtopt --make-template

# Write to a custom path (reuses the -j/--json-file flag)
igtopt --make-template -j /tmp/my_template.xlsx

# Print the sheet list that would be generated (no file written)
igtopt --make-template --list-sheets

# Custom C++ header directory
igtopt --make-template --header-dir /path/to/include/gtopt

ZIP output (<tt>-z</tt> / <tt>--zip</tt>)

The -z flag creates a single ZIP archive that bundles the JSON configuration file and all Parquet/CSV data files together, preserving the full directory structure. This archive is directly compatible with gtopt_guisrv (upload via the GUI) and gtopt_websrv (submit via the REST API):

igtopt system.xlsx --zip
# Produces: system.zip
#   system.zip
#   ├── system.json
#   └── system/
#       ├── Demand/
#       │   └── lmax.parquet
#       └── GeneratorProfile/
#           └── profile.parquet

Conversion statistics

After a successful conversion, igtopt logs statistics (at INFO level) similar to those printed by plp2gtopt:

=== System statistics ===
  Buses           : 57
  Generators      : 7
  Demands         : 42
  Lines           : 80
=== Simulation statistics ===
  Blocks          : 1
  Stages          : 1
  Scenarios       : 1
=== Key options ===
  use_single_bus  : False
  scale_objective : 1000
  demand_fail_cost: 1000
  input_directory : system
=== Conversion time ===
  Elapsed         : 0.123s

Excel workbook format

The workbook can contain any of the following named sheets. Sheets whose name starts with . (e.g. .notes) are silently skipped.

System and simulation sheets

| Sheet name | Description |
|---|---|
| options | Key/value pairs written to the JSON options block (two columns: option, value) |
| block_array | Time blocks (uid, name, duration) |
| stage_array | Investment stages (uid, first_block, count_block, discount_factor) |
| scenario_array | Scenarios (uid, probability_factor) |
| phase_array | SDDP phases (uid, first_stage, count_stage, aperture_set) — leave empty for monolithic solver; aperture_set is a JSON array of aperture UIDs to restrict the SDDP backward pass for this phase |
| scene_array | SDDP scenes (uid, first_scenario, count_scenario) — leave empty for monolithic solver |
| bus_array | Electrical buses (uid, name, voltage, reference_theta, use_kirchhoff) |
| generator_array | Generators (uid, name, bus, gcost, pmax, capacity, …) |
| generator_profile_array | Generator capacity factors (uid, name, generator, profile, scost) |
| demand_array | Demands (uid, name, bus, lmax, fcost, …) |
| demand_profile_array | Demand scaling profiles (uid, name, demand, profile, scost) |
| line_array | Transmission lines (uid, name, bus_a, bus_b, reactance, tmax_ab, tmax_ba, …) |
| battery_array | Batteries (uid, name, bus, emin, emax, pmax_charge, pmax_discharge, …) |
| converter_array | Battery charge/discharge converter links (uid, name, battery, generator, demand) |
| reserve_zone_array | Spinning-reserve zones (uid, name, urreq, drreq) |
| reserve_provision_array | Generator–zone reserve provision links (uid, name, generator, reserve_zones) |
| junction_array | Hydraulic junctions (uid, name, drain) |
| waterway_array | Water channels between junctions (uid, name, junction_a, junction_b, fmax) |
| flow_array | External inflows/outflows at junctions (uid, name, junction, discharge, direction) |
| reservoir_array | Storage lakes/dams (uid, name, junction, emin, emax, ecost) |
| filtration_array | Water seepage from waterways into reservoirs (uid, name, waterway, reservoir, slope, constant, segments — segments is a JSON array of {"volume", "slope", "constant"} entries for piecewise-linear seepage) |
| turbine_array | Hydro turbines (uid, name, waterway, generator, conversion_rate) |
| reservoir_efficiency_array | Volume-dependent turbine productivity curves (uid, name, turbine, reservoir, mean_efficiency) |
| user_constraint_array | User-defined custom LP constraints added to the problem |

Time-series sheets (<tt>@</tt> convention)

Any sheet whose name contains @ encodes a time-series table that is written as a Parquet (or CSV) file to the input_directory. The naming convention is:

<component_type>@<field_name>

The sheet must contain scenario, stage, block index columns followed by one column per element (named after the element's name field).

| Sheet name example | Produces | Description |
|---|---|---|
| Demand@lmax | <input_dir>/Demand/lmax.parquet | Per-block demand limits |
| GeneratorProfile@profile | <input_dir>/GeneratorProfile/profile.parquet | Solar/wind capacity factors |
| Generator@pmax | <input_dir>/Generator/pmax.parquet | Per-block generator max output |
| Battery@emax | <input_dir>/Battery/emax.parquet | Per-block battery energy upper bound |
When a time-series sheet is present, the corresponding field in the system array sheet (e.g. lmax in demand_array) should contain the string "lmax" (the file stem without extension) as a file reference.

Example: demand with a 24-hour lmax time-series

demand_array sheet:

| uid | name | bus | lmax |
|---|---|---|---|
| 1 | d3 | b3 | lmax |
| 2 | d4 | b4 | lmax |

Demand@lmax sheet:

| scenario | stage | block | d3 | d4 |
|---|---|---|---|---|
| 1 | 1 | 1 | 30 | 20 |
| 1 | 1 | 2 | 28 | 18 |
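The sheet-name convention maps mechanically to an output path; a sketch of the documented rule:

```python
from pathlib import Path

def sheet_output_path(sheet_name: str, input_dir: str, fmt: str = "parquet") -> Path:
    """Map an igtopt time-series sheet name like 'Demand@lmax' to the
    file it produces (sketch of the <component>@<field> convention)."""
    component, field = sheet_name.split("@", 1)
    return Path(input_dir) / component / f"{field}.{fmt}"
```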

All options

| Flag | Default | Description |
|---|---|---|
| XLSX (positional) | — | Excel workbook(s) to convert (one or more) |
| -j, --json-file FILE | <first stem>.json | Output JSON file path |
| -d, --input-directory DIR | <first stem>/ | Directory for time-series data files |
| -f, --input-format {parquet,csv} | parquet | Format for time-series output files |
| -n, --name NAME | <first stem> | System name written to JSON name field |
| -c, --compression ALG | gzip | Parquet compression (gzip, snappy, brotli, '' for none) |
| -p, --pretty | off | Write indented (4-space) JSON instead of compact |
| -N, --skip-nulls | off | Omit keys with null/NaN values from JSON output |
| -U, --parse-unexpected-sheets | off | Also process sheets not in the expected list |
| -z, --zip | off | Bundle JSON + data files into a ZIP archive |
| --validate | off | Check workbook for errors without writing output (exit 0 = OK, 1 = errors) |
| --ignore-errors | off | Proceed despite errors in individual sheets (output may be incomplete) |
| -l, --log-level LEVEL | INFO | Verbosity (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| -V, --version | — | Print version and exit |
| -T, --make-template | off | Generate the Excel template from C++ JSON headers instead of converting |
| --header-dir DIR | auto-detect | Path to include/gtopt/ (used with --make-template) |
| --list-sheets | off | Print sheet list from C++ headers and exit (used with --make-template) |

cvs2parquet

→ Full documentation

Converts CSV time-series files to Parquet format.

# Convert a single file
cvs2parquet input.csv output.parquet

# Use an explicit PyArrow schema for type enforcement
cvs2parquet --schema input.csv output.parquet

Columns named stage, block, or scenario are cast to int32; all other columns are cast to float64.
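The casting rule is easy to state as code. This sketch mirrors the documented behavior only; the tool itself applies the cast via PyArrow:

```python
INDEX_COLUMNS = {"stage", "block", "scenario"}

def target_dtype(column_name: str) -> str:
    """Documented cvs2parquet casting rule: index columns → int32,
    all other columns → float64."""
    return "int32" if column_name in INDEX_COLUMNS else "float64"
```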


ts2gtopt

→ Full documentation

Projects hourly (or finer) time-series data onto a gtopt planning horizon and produces block-aggregated schedule files (Parquet or CSV) ready for use as gtopt input schedules. It also embeds an hour_block_map in the output planning JSON so that block-level solver results can be reconstructed back into a full hourly time-series.

Concepts

TermMeaning
HorizonA JSON definition of scenarios, stages (investment periods), and blocks (representative operating hours)
Block scheduleA Parquet/CSV file with scenario, stage, block, uid:X columns consumed directly by gtopt as an input schedule (e.g. Generator/pmax.parquet)
**hour_block_map**An array of {"hour": i, "stage": s, "block": b} entries embedded in the planning JSON that maps each processed calendar hour back to the (stage, block) it was projected into
**output_hour/**Post-processing output directory with scenario, hour, uid:X files reconstructed from block-level solver output
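For intuition, here is a sketch of an hour_block_map for an assumed uniform layout (equal-length stages, block = hour of day). The real ts2gtopt map follows the actual calendar, so treat this only as an illustration of the structure:

```python
def build_uniform_hour_block_map(hours: int = 8760, n_stages: int = 12,
                                 n_blocks: int = 24) -> list[dict]:
    """One {'hour', 'stage', 'block'} entry per processed hour.
    Stages are approximated as equal slices of the year; blocks cycle
    with the hour of day. Illustrative only."""
    per_stage = hours // n_stages
    return [
        {"hour": h,
         "stage": min(h // per_stage, n_stages - 1) + 1,
         "block": h % n_blocks + 1}
        for h in range(hours)
    ]
```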

Basic usage

# Project a single hourly CSV onto an auto-generated 12-stage × 24-block horizon
ts2gtopt demand.csv -y 2023 -o input/

# Write 4 seasonal stages × 6 blocks each
ts2gtopt solar.csv -y 2023 --stages 4 --blocks 6 -o input/

# Use a custom horizon JSON and export updated block durations
ts2gtopt pmax.csv -y 2023 -H my_horizon.json --output-horizon updated_horizon.json -o input/

# Use the horizon embedded in an existing planning JSON (reads simulation.stage_array / block_array)
ts2gtopt load.csv -y 2023 -P case.json -o input/

Reconstructing hourly output

After running the gtopt solver (which produces output/Generator/generation_sol.csv etc.), use the hour_block_map embedded in the planning JSON to expand block-level results back into a full hourly time-series:

from ts2gtopt import write_output_hours

# Reads hour_block_map from case.json, writes output_hour/ next to output/
written = write_output_hours("bat_4b_2023.json")
# → output_hour/Generator/generation_sol.csv
#    scenario  hour   uid:1   uid:2
#    1          0    127.5    0.0
#    1          1    124.9    0.0
#    ...  (8760 rows for a full-year case)

Or from the public API:

from ts2gtopt import build_hour_block_map, reconstruct_output_hours, load_horizon

h = load_horizon("my_horizon.json")
hour_map = build_hour_block_map(h, year=2023)
reconstruct_output_hours("output/", hour_map, output_hour_dir="output_hour/")

CLI reference

ts2gtopt [options] INPUT [INPUT ...]

Positional arguments:
  INPUT                  Input time-series file(s) (CSV or Parquet)

Options:
  -y, --year YEAR        Calendar year for the projection (required unless -H/-P is given with explicit dates)
  -o, --output DIR       Output directory for block schedule files (default: current directory)
  -f, --format {parquet,csv}
                         Output file format (default: parquet)
  -s, --stages N         Number of planning stages (default: 12, one per calendar month)
  -b, --blocks N         Number of representative blocks per stage (default: 24, one per hour of day)
  -H, --horizon FILE     Load planning horizon from a JSON file
  -P, --planning FILE    Load horizon from an existing gtopt planning JSON
  --output-horizon FILE  Write the duration-updated horizon to FILE after projecting
  --agg {mean,median,min,max,sum}
                         Aggregation function (default: mean)
  --interval-hours H     Duration of each input observation in hours (auto-detected by default)
  --verify               Print energy-conservation ratios after projection
  -v, --verbose          Enable verbose logging

gtopt2pp

Converts a gtopt JSON case back to a pandapower network file, optionally solving a DC OPF and/or running topology diagnostics. This is the reverse direction of pp2gtopt.

Basic usage

# Convert a gtopt case to pandapower JSON
gtopt2pp cases/ieee_9b_ori/ieee_9b_ori.json

# Specify output file
gtopt2pp cases/ieee_9b_ori/ieee_9b_ori.json -o ieee9b_pp.json

# Convert and solve DC OPF
gtopt2pp cases/ieee_9b_ori/ieee_9b_ori.json --solve

# Convert all blocks (one output file per block)
gtopt2pp cases/ieee_9b/ieee_9b.json --all-blocks

# Select specific blocks (comma-separated, ranges)
gtopt2pp cases/ieee_9b/ieee_9b.json -b 1,5-10

# Run pandapower diagnostic on converted network
gtopt2pp cases/ieee_14b_ori/ieee_14b_ori.json --diagnostic

Multi-block support

When the source case has multiple blocks (e.g. 24-hour dispatch), use --all-blocks or -b SPEC to produce one pandapower network per block:

gtopt2pp cases/ieee_9b/ieee_9b.json --all-blocks
# Produces: ieee_9b_pp_b1.json, ieee_9b_pp_b2.json, …, ieee_9b_pp_b24.json

Elements skipped during conversion

The following gtopt elements have no pandapower equivalent and are silently skipped: batteries, converters, junctions, waterways, reservoirs, turbines, filtrations, flows. A summary of skipped elements is logged.

All options

| Flag | Default | Description |
|------|---------|-------------|
| `case_file` (positional) | | Path to gtopt JSON case file |
| `-o, --output PATH` | `<stem>_pp.json` | Output pandapower JSON file path |
| `-s, --scenario UID` | first | Scenario UID to convert |
| `-b, --block SPEC` | first | Block UID spec: single (1), list (1,2,4), range (1-5), or mix (1,3-5,8) |
| `--solve` | off | Run pandapower DC OPF after conversion |
| `--all-blocks` | off | Convert all blocks (one file per block) |
| `--check / --no-check` | enabled | Validate source JSON via gtopt_check_json |
| `--diagnostic` | off | Run pandapower topology diagnostic |

run_gtopt

Smart solver wrapper that detects case types (PLP, gtopt directory, or JSON file), runs conversions when needed, and invokes the gtopt binary with appropriate runtime options. Integrates pre-flight and post-flight checks automatically.

Basic usage

# Auto-detect case type from CWD
run_gtopt

# PLP case: auto-convert to gtopt and solve
run_gtopt plp_case_2y

# gtopt case directory: solve directly
run_gtopt cases/ieee_9b

# Explicit JSON file
run_gtopt cases/ieee_9b/ieee_9b.json

# Pass extra arguments to the gtopt binary after --
run_gtopt cases/ieee_9b -- --set use_single_bus=true --stats

Pre-flight checks (enabled by default)

Before invoking the solver, run_gtopt validates:

  • JSON syntax and file readability
  • Input file existence (Parquet/CSV references)
  • Output directory writability
  • Compression codec availability (auto-fallback if unavailable)
  • System/simulation statistics (gtopt_check_json --info)
  • Full JSON validation (gtopt_check_json)

Disable with --no-check or abort on any warning with --strict.

Post-flight checks

After the solver completes, run_gtopt automatically:

  • Analyzes any error LP files via gtopt_check_lp
  • Validates solver output via gtopt_check_output
  • Prints a solution summary

All options

| Flag | Default | Description |
|------|---------|-------------|
| `CASE` (positional) | CWD | PLP directory, gtopt directory, or JSON file |
| `-t, --threads N` | auto | Number of LP solver threads |
| `-C, --compression CODEC` | zstd | Output compression codec |
| `-o, --output-dir DIR` | | Override output directory |
| `--plp-args ARGS` | | Extra arguments for plp2gtopt (quote whole string) |
| `--check / --no-check` | enabled | Run pre-flight checks |
| `--strict` | off | Abort on any warning |
| `--enable-check NAME` | | Enable specific check (repeatable) |
| `--disable-check NAME` | | Disable specific check (repeatable) |
| `--list-checks` | | List available checks and exit |
| `--convert-only` | off | Convert PLP case but do not run solver |
| `--export-json FILE` | | Write sanitized planning JSON to file |
| `--dry-run` | off | Print commands without executing |
| `-l, --log-level LEVEL` | INFO | Verbosity: DEBUG, INFO, WARNING, ERROR |
| `-V, --version` | | Print version and exit |
| `-- [ARGS]` | | Pass remaining arguments to gtopt binary |

gtopt_check_json

Validates gtopt JSON planning files and reports potential issues. Also serves as a quick system/simulation statistics tool (similar to gtopt --stats).

Basic usage

# Validate a JSON case and report issues
gtopt_check_json cases/ieee_9b/ieee_9b.json

# Print system/simulation statistics only (no validation)
gtopt_check_json --info cases/ieee_9b/ieee_9b.json

# Show detailed simulation structure (scenarios, stages, phases, apertures)
gtopt_check_json --show-simulation cases/sddp_hydro_3phase/sddp_hydro_3phase.json

# Validate multiple JSON files (merged before checking)
gtopt_check_json system.json overrides.json

# Run interactive configuration setup
gtopt_check_json --init-config

Validation checks

The tool runs configurable checks organized by severity:

| Severity | Color | Meaning |
|----------|-------|---------|
| CRITICAL | Red | Issues that will likely cause solver failure |
| WARNING | Yellow | Potential problems or suboptimal configurations |
| NOTE | Cyan | Informational observations |

Individual checks can be enabled/disabled via --init-config or the configuration file (~/.gtopt.conf).

Exit codes

| Code | Meaning |
|------|---------|
| 0 | OK: no critical issues (warnings/notes may be present) |
| 1 | Critical issues found |

All options

| Flag | Default | Description |
|------|---------|-------------|
| `json_files` (positional) | | Path(s) to gtopt JSON case files |
| `--info` | off | Print system/simulation statistics and exit |
| `--show-simulation` | off | Print detailed simulation structure |
| `--config PATH` | `~/.gtopt.conf` | Path to configuration file |
| `--init-config` | off | Run interactive configuration setup |
| `--no-color` | off | Disable colored output |

gtopt_check_lp

Diagnoses infeasible LP files generated by the gtopt solver. Combines static analysis, local IIS (Irreducible Infeasible Subsystem) solvers, and optional NEOS remote analysis to pinpoint the root cause of infeasibility.

Basic usage

# Analyze an LP file
gtopt_check_lp error_0.lp

# Analyze a gzip-compressed LP file
gtopt_check_lp error_0.lp.gz

# Auto-find the newest error*.lp[.gz] in the current directory
gtopt_check_lp --last

# Static analysis only (no solver invocation)
gtopt_check_lp error_0.lp --analyze-only

# Use a specific solver
gtopt_check_lp error_0.lp --solver coinor

# Submit to NEOS remote server (requires email)
gtopt_check_lp error_0.lp --solver neos --email user@example.com

# Run interactive configuration setup
gtopt_check_lp --init-config

Analysis pipeline

  1. Static analysis — parses the LP file directly:
    • Variables with conflicting bounds (lb > ub)
    • Empty or fixed constraints
    • Numerical range issues (very large/small coefficients)
    • Duplicate constraint names
    • Problem statistics (rows, columns, non-zeros)
  2. Local IIS finding — tries available solvers in order: CPLEX → HiGHS → COIN-OR CLP → CBC → GLPK
  3. NEOS remote analysis — submits to https://neos-server.org via XML-RPC using CPLEX (requires email)
  4. AI diagnostics (optional) — expert infeasibility diagnosis from Claude, OpenAI, DeepSeek, or GitHub AI

Quiet mode

When called with --quiet (used automatically by the gtopt binary and run_gtopt), the tool never fails (always exits 0), never prompts for input, and handles all errors gracefully with warnings.

All options

| Flag | Default | Description |
|------|---------|-------------|
| `LP_FILE` (positional) | | Path to LP file (.lp, .lp.gz, .lp.gzip) |
| `--last` | off | Auto-find newest error*.lp[.gz] |
| `--analyze-only` | off | Static analysis only, no solver |
| `-q, --quiet` | off | Non-failing quiet mode |
| `--solver SOLVER` | all | Solver strategy: all, auto, cplex, highs, coinor, glpk, neos |
| `--no-neos` | off | Skip NEOS submissions |
| `--email EMAIL` | | Email for NEOS submissions |
| `--output FILE` | | Write report to file |
| `-v, --verbose` | off | Verbose logging |
| `--config FILE` | `~/.gtopt.conf` | Config file path |
| `--init-config` | off | Interactive config wizard |

gtopt_check_output

Validates and analyzes gtopt solver output for completeness and correctness. Auto-discovers results and JSON files in a case directory.

Basic usage

# Analyze output in a case directory
gtopt_check_output cases/ieee_9b

# Explicit paths to results and JSON
gtopt_check_output -r output/ -j ieee_9b.json

# Quiet mode — only show warnings and critical findings
gtopt_check_output cases/ieee_9b --quiet

# Run interactive configuration setup
gtopt_check_output --init-config

Checks performed

  • Output completeness — verifies expected output files exist (solution.csv, generation_sol, balance_dual, etc.)
  • Load shedding analysis — detects unserved energy in fail_sol
  • Generation/demand balance — validates energy conservation
  • Line congestion ranking — identifies most congested transmission lines
  • LMP statistics — locational marginal price analysis
  • Cost breakdown — dispatches by generator cost tier
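The generation/demand balance check boils down to comparing energy totals. A minimal sketch of that idea (the real tool's tolerances and column handling are not shown here):

```python
# Sketch of the generation/demand balance idea: total dispatched
# energy should match total demand within a relative tolerance.
def energy_balance_ok(generation, demand, rel_tol=1e-3):
    """True when sum(generation) matches sum(demand) within rel_tol."""
    gen, dem = sum(generation), sum(demand)
    if dem == 0:
        return gen == 0
    return abs(gen - dem) / abs(dem) <= rel_tol
```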

All options

| Flag | Default | Description |
|------|---------|-------------|
| `CASE_DIR` (positional) | CWD | Case directory (auto-discovers JSON and results) |
| `-r, --results-dir DIR` | | Explicit results directory |
| `-j, --json-file FILE` | | Explicit planning JSON file |
| `--no-color` | off | Disable colored output |
| `-q, --quiet` | off | Only show warnings and critical findings |
| `-l, --log-level LEVEL` | WARNING | Verbosity: DEBUG, INFO, WARNING, ERROR |
| `--config FILE` | `~/.gtopt.conf` | Config file path |
| `--init-config` | off | Initialize config section |
| `-V, --version` | | Print version and exit |

gtopt_check_solvers

Discovers and validates gtopt LP solver plugins by running a built-in test suite against each available solver. Useful for quickly verifying that solver backends (CLP, CBC, HiGHS, CPLEX, …) are correctly installed and produce correct results.

Basic usage

# List all available LP solver plugins
gtopt_check_solvers --list

# Check all available solvers (default)
gtopt_check_solvers

# Check a specific solver
gtopt_check_solvers --solver clp

# Check multiple specific solvers
gtopt_check_solvers --solver clp --solver highs

# Use an explicit gtopt binary path
gtopt_check_solvers --gtopt-bin /opt/gtopt/bin/gtopt

# Run only a subset of built-in tests
gtopt_check_solvers --test single_bus_lp --test kirchhoff_lp

# Verbose output (show failure details)
gtopt_check_solvers --verbose

Built-in test cases

| Test name | Description | Expected status |
|-----------|-------------|-----------------|
| single_bus_lp | Single-bus copper-plate LP: 1 generator, 1 demand | 0 (optimal) |
| kirchhoff_lp | 4-bus DC OPF with Kirchhoff voltage-angle constraints | 0 (optimal) |
| feasibility_lp | Feasibility check: demand exactly at generator capacity | 0 (optimal) |

For each test the tool:

  1. Writes the problem as a temporary gtopt JSON file
  2. Calls gtopt <file> --solver <name> with a per-test timeout
  3. Reads output/solution.csv and validates status and obj_value

Exit codes

| Code | Meaning |
|------|---------|
| 0 | All solvers and all tests passed |
| 1 | At least one test failed (or no solvers found) |
| 2 | Error: gtopt binary not found or invalid path |

All options

| Flag | Default | Description |
|------|---------|-------------|
| `--list, -l` | off | List available solvers and exit |
| `--solver SOLVER, -s` | all | Solver to test (repeatable) |
| `--test TEST, -t` | all | Test case to run (repeatable; choices: single_bus_lp, kirchhoff_lp, feasibility_lp) |
| `--gtopt-bin PATH` | auto-detect | Path to the gtopt binary |
| `--timeout SECONDS` | 60 | Per-test timeout in seconds |
| `--no-color` | off | Disable colored output |
| `-v, --verbose` | off | Show failure details |
| `-V, --version` | | Print version and exit |

Binary discovery

The tool searches for the gtopt binary in this order:

  1. GTOPT_BIN environment variable
  2. gtopt on PATH
  3. Standard build directories (build/standalone/gtopt, etc.)

gtopt_compress_lp

Compresses LP debug files generated by the gtopt solver (when lp_debug=true or lp_build=true). Supports multiple compression formats and integrates seamlessly with the solver's post-solve workflow.

Basic usage

# Compress an LP file
gtopt_compress_lp file.lp

# Use a specific compression codec
gtopt_compress_lp file.lp --codec zstd

# Quiet mode (non-interactive, never fails)
gtopt_compress_lp file.lp --quiet

# Show available compression tools
gtopt_compress_lp --list-tools

# Run interactive setup
gtopt_compress_lp --init-config

Supported compressors

| Tool | Extension | Notes |
|------|-----------|-------|
| zstd | .zst | Recommended: fast with excellent compression ratio |
| gzip | .gz | Universally available |
| lz4 | .lz4 | Fastest compression/decompression |
| bzip2 | .bz2 | Best ratio for text files |
| xz | .xz | Excellent ratio, slower compression |
| lzma | .lzma | Similar to xz |

Compression cascade

  1. Codec hint from --codec (if provided and available)
  2. Configured compressor from config file (auto → first available)
  3. First available tool found on PATH
  4. Skip compression (leave original unchanged)
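A minimal sketch of the cascade, with the tool lookup injectable so the fallback order is easy to follow; the fallback ordering here is an assumption based on the compressor table above:

```python
# Sketch of the codec-selection cascade: explicit hint, then the
# configured tool, then the first available fallback, else None
# (step 4: skip compression, leave the original unchanged).
import shutil

FALLBACK_ORDER = ["zstd", "gzip", "lz4", "bzip2", "xz", "lzma"]

def pick_compressor(codec_hint=None, configured=None, which=shutil.which):
    for candidate in (codec_hint, configured, *FALLBACK_ORDER):
        if candidate and which(candidate):
            return candidate
    return None
```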

All options

| Flag | Default | Description |
|------|---------|-------------|
| `FILE.lp` (positional) | | LP file(s) to compress |
| `--init-config` | off | Run interactive setup wizard |
| `--list-tools` | off | Show available compression tools |
| `--quiet` | off | Non-interactive, never-fail mode |
| `--codec CODEC` | | Codec suggestion: gzip, zstd, lz4, etc. |
| `--compressor TOOL` | | Override configured compressor |
| `--config PATH` | `~/.gtopt_compress_lp.conf` | Config file path |
| `--color {auto,always,never}` | auto | Terminal color output |
| `--version` | | Show version and exit |

gtopt_config

Unified configuration management library used by all gtopt Python scripts. Reads and writes a single INI-format configuration file shared across tools.

gtopt_config is a library module, not a standalone command. It is used by gtopt_check_json, gtopt_check_lp, gtopt_check_output, run_gtopt, and gtopt_compress_lp.

Configuration file

Default location: ~/.gtopt.conf. The file uses INI format with a [global] section for shared settings, per-tool sections, and a [gtopt] section for the C++ binary:

[global]
ai_enabled  = false
ai_provider = claude
ai_model    =
color       = auto

[gtopt]
solver              = highs
algorithm           = barrier
threads             = 4
output-format       = parquet
output-compression  = zstd

[gtopt_check_json]
check_uid_uniqueness = true

[gtopt_check_output]
congestion_top_n = 10

[run_gtopt]
default_threads = 0

The [gtopt] section provides default values for the C++ binary's command-line options. CLI flags always take precedence. See Usage Guide for the full list of supported keys.
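Because the file is plain INI, tool sections can be read with the standard library. This snippet parses a fragment modeled on the sample above:

```python
# Reading the shared INI configuration with the standard library;
# section and key names follow the sample shown above.
import configparser

config = configparser.ConfigParser()
config.read_string("""
[global]
color = auto

[gtopt]
solver  = highs
threads = 4
""")

solver = config.get("gtopt", "solver", fallback="highs")
threads = config.getint("gtopt", "threads", fallback=0)
print(solver, threads)  # → highs 4
```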

Initializing configuration

Each tool that uses gtopt_config provides an --init-config flag that runs an interactive setup wizard to create or update its section:

gtopt_check_json --init-config
gtopt_check_lp --init-config
gtopt_check_output --init-config
run_gtopt --init-config

sddp_monitor

Interactive SDDP solver monitoring dashboard. Polls the JSON status file written by the gtopt SDDP solver and displays live charts in two figure windows:

  • Figure 1 – Real-time charts: CPU load (%) and active worker threads vs wall-clock seconds.
  • Figure 2 – Iteration-indexed charts: objective upper/lower bounds per scene and convergence gap vs iteration number.

Basic usage

# Monitor the default output/sddp_status.json
sddp_monitor

# Specify a custom status file and polling interval
sddp_monitor --status-file /path/to/sddp_status.json --poll 2.0

# Headless mode (print to stdout, no GUI window)
sddp_monitor --no-gui

The tool exits when the solver reports "converged" or when you press Ctrl-C.

Text mode output

In headless mode (--no-gui), the tool prints a summary table to stdout:

[Time]  [Iter]  [LB]           [UB]           [Gap]      [Status]
  7.1s   45     12345.1234   12450.5678     0.008765   running

All options

| Flag | Default | Description |
|------|---------|-------------|
| `--status-file PATH` | `output/sddp_status.json` | Path to SDDP status file |
| `--poll SECONDS` | 1.0 | Polling interval |
| `--no-gui` | off | Print to stdout instead of GUI windows |

gtopt_field_extractor

Extracts field metadata from gtopt C++ headers and generates Markdown or HTML documentation tables. Parses ///< Description [units] comments on struct member declarations and produces a table with columns: Field, C++ Type, JSON Type, Units, Required, Description.

Basic usage

# Dump all element tables to stdout as Markdown
gtopt_field_extractor

# Write full HTML reference to a file
gtopt_field_extractor --format html --output INPUT_DATA_API.html

# Extract only Generator and Demand elements
gtopt_field_extractor --elements Generator Demand

Using with gtopt_guisrv and gtopt_websrv

Both services accept a ZIP archive containing the JSON configuration file and its associated Parquet/CSV data files. Use plp2gtopt -z to produce a ZIP that is ready to upload without any extra packaging step.

gtopt_guisrv (browser GUI)

  1. Start the GUI service:

    gtopt_guisrv
    # Open http://localhost:5001
  2. Click Upload case and select the .zip file produced by plp2gtopt -z.
  3. Edit the case if needed, then click Solve to submit it to the webservice.

For installation and service setup, see guiservice/INSTALL.md.

gtopt_websrv (REST API)

Submit a ZIP directly to the REST API:

curl -X POST http://localhost:3000/api/solve \
  -F "file=@gtopt_case_2y.zip"

The server runs the gtopt solver and returns a results ZIP containing the solution files.

For full API reference and deployment instructions, see webservice/INSTALL.md.