Create a System representing the entire United States

Originally Contributed by: Clayton Barrows

This example demonstrates how to assemble a System representing the entire U.S. using
PowerSystems.jl and the data assembled by Xu et al. We'll use the same tabular data parsing
capability demonstrated on the RTS-GMLC dataset.
Activating project at `~/work/SIIP-Tutorial/SIIP-Tutorial/how-to/create-system-representing-united-states`
57 dependencies successfully precompiled in 189 seconds (29 already precompiled)
using PowerSystems
using TimeSeries
using Dates
using TimeZones
using DataFrames
using CSV
abstract type AbstractOS end
abstract type Unix <: AbstractOS end
abstract type BSD <: Unix end
abstract type Windows <: AbstractOS end
abstract type MacOS <: BSD end
abstract type Linux <: BSD end
if Sys.iswindows()
    const OS = Windows
elseif Sys.isapple()
    const OS = MacOS
else
    const OS = Linux
end
function unzip(::Type{<:BSD}, filename, directory)
    @assert success(`tar -xvf $filename -C $directory`) "Unable to extract $filename to $directory"
end

function unzip(::Type{Windows}, filename, directory)
    path_7z = if Base.VERSION < v"0.7-"
        "$JULIA_HOME/7z"
    else
        sep = Sys.iswindows() ? ";" : ":"
        withenv(
            "PATH" => string(
                joinpath(Sys.BINDIR, "..", "libexec"),
                sep,
                Sys.BINDIR,
                sep,
                ENV["PATH"],
            ),
        ) do
            Sys.which("7z")
        end
    end
    @assert success(`$path_7z x $filename -y -o$directory`) "Unable to extract $filename to $directory"
end
unzip (generic function with 2 methods)
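The pattern above uses Julia's multiple dispatch over an abstract type hierarchy to pick the right extraction method for the current platform. A minimal sketch of the same idea, using a hypothetical `describe_os` function purely for illustration:

```julia
# Hypothetical example of dispatch over an OS type hierarchy.
abstract type AbstractOS end
abstract type Unix <: AbstractOS end
abstract type BSD <: Unix end
abstract type MacOS <: BSD end
abstract type Windows <: AbstractOS end

describe_os(::Type{<:Unix}) = "unix-like"   # covers BSD, MacOS, and any other Unix subtype
describe_os(::Type{Windows}) = "windows"

describe_os(MacOS)    # → "unix-like", via MacOS <: BSD <: Unix
```

Because `MacOS <: BSD <: Unix`, a single `::Type{<:Unix}` method covers every Unix-like platform, while Windows gets its own method, which is exactly how `unzip` chooses between `tar` and `7z`.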
PowerSystems.jl links to some test data that is suitable for this example. Let's download the test data.
println("downloading data...")
datadir = joinpath(Utils.path(:folder), "how-to/create-system-representing-united-states/data")
siip_data = joinpath(datadir, "SIIP")
if !isdir(datadir)
    mkpath(datadir)
    tempfilename = download("https://zenodo.org/record/3753177/files/USATestSystem.zip?download=1")
    unzip(OS, tempfilename, datadir)
    mkpath(siip_data)
end
config_dir = joinpath(
    joinpath(Utils.path(:folder), "how-to/create-system-representing-united-states"),
    "config",
)
This is a big dataset, so typically one would only want to include one of the available
interconnects. Let's use Texas to start. You can set interconnect = nothing
if you want everything.
interconnect = "Texas"
timezone = FixedTimeZone("UTC-6")
initial_time = ZonedDateTime(DateTime("2016-01-01T00:00:00"), timezone)
2016-01-01T00:00:00-06:00
There are a few minor incompatibilities between the data and the supported tabular data format. We can resolve those here.
First, PowerSystems.jl only supports parsing piecewise linear generator costs from tabular data. So, we can sample the quadratic polynomial cost curves and provide PWL points.
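As a concrete illustration of the sampling idea (with made-up coefficients, not values from the dataset), a quadratic cost curve can be evaluated at evenly spaced output points between Pmin and Pmax to yield piecewise linear breakpoints:

```julia
# Sample a quadratic cost curve c(P) = c0 + c1*P + c2*P^2 at the tranche
# boundaries between Pmin and Pmax. Coefficients are illustrative only.
c0, c1, c2 = 100.0, 10.0, 0.25
Pmin, Pmax, tranches = 20.0, 100.0, 2
points = range(Pmin, Pmax; length = tranches + 1)   # 20.0, 60.0, 100.0
pwl = [(P, c0 + c1 * P + c2 * P^2) for P in points]
# → [(20.0, 400.0), (60.0, 1600.0), (100.0, 3600.0)]
```

The `make_pwl` function below does the same thing on the dataset's columns, expressed as output fractions of Pmax and incremental heat rates, to fit the tabular parser's expected format.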
println("formatting data ...")
!isnothing(interconnect) && println("filtering data to include $interconnect ...")
gen = DataFrame(CSV.File(joinpath(datadir, "plant.csv")))
filter!(row -> row[:interconnect] == interconnect, gen)
gencost = DataFrame(CSV.File(joinpath(datadir, "gencost.csv")))
gen = innerjoin(gen, gencost, on = :plant_id, makeunique = true, validate = (false, false))
function make_pwl(gen::DataFrame, traunches = 2)
    output_pct_cols = ["output_point_" * string(i) for i in 0:traunches]
    hr_cols = ["heat_rate_incr_" * string(i) for i in 1:traunches]
    pushfirst!(hr_cols, "heat_rate_avg_0")
    columns =
        NamedTuple{Tuple(Symbol.(vcat(output_pct_cols, hr_cols)))}(repeat([Float64[]], 6))
    pwl = DataFrame(columns)
    for row in eachrow(gen)
        traunch_len = (1.0 - row.Pmin / row.Pmax) / traunches
        pct = [row.Pmin / row.Pmax + i * traunch_len for i in 0:traunches]
        #c(pct) = pct * row.Pmax * (row.GenIOB + row.GenIOC^2 + row.GenIOD^3)
        c(pct) = pct * row.Pmax * (row.c1 + row.c2^2) + row.c0 # formats the "c" columns to hack the heat rate parser in PSY
        hr = [c(pct[1])]
        [push!(hr, c(pct[i + 1]) - hr[i]) for i in 1:traunches]
        push!(pwl, vcat(pct, hr))
    end
    return hcat(gen, pwl)
end
gen = make_pwl(gen);
gen[!, "fuel_price"] .= 1000.0; #this formats the "c" columns to hack the heat rate parser in PSY
There are some incomplete aspects of this dataset. Here, we assign some approximate minimum up/down times and make some minor adjustments to categories. There are better ways to do this, but this works for this script.
gen[:, :unit_type] .= "OT"
gen[:, :min_up_time] .= 0.0
gen[:, :min_down_time] .= 0.0
gen[:, :ramp_30] .= gen[:, :ramp_30] ./ 30.0 # we need ramp rates in MW/min
[
    gen[gen.type .== "wind", col] .= ["Wind", 0.0, 0.0][ix] for
    (ix, col) in enumerate([:unit_type, :min_up_time, :min_down_time])
]
[
    gen[gen.type .== "solar", col] .= ["PV", 0.0, 0.0][ix] for
    (ix, col) in enumerate([:unit_type, :min_up_time, :min_down_time])
]
[
    gen[gen.type .== "hydro", col] .= ["HY", 0.0, 0.0][ix] for
    (ix, col) in enumerate([:unit_type, :min_up_time, :min_down_time])
]
[
    gen[gen.type .== "ng", col] .= [4.5, 8][ix] for
    (ix, col) in enumerate([:min_up_time, :min_down_time])
]
[
    gen[gen.type .== "coal", col] .= [24, 48][ix] for
    (ix, col) in enumerate([:min_up_time, :min_down_time])
]
[
    gen[gen.type .== "nuclear", col] .= [72, 72][ix] for
    (ix, col) in enumerate([:min_up_time, :min_down_time])
]
At the moment, PowerSimulations can't do unit commitment with generators that have Pmin = 0.0, so we replace zero values with 5% of Pmax.
idx_zero_pmin = [
    g.type in ["ng", "coal", "hydro", "nuclear"] && g.Pmin <= 0 for
    g in eachrow(gen[:, [:type, :Pmin]])
]
gen[idx_zero_pmin, :Pmin] = gen[idx_zero_pmin, :Pmax] .* 0.05
gen[:, :name] = "gen" .* string.(gen.plant_id)
CSV.write(joinpath(siip_data, "gen.csv"), gen)
Let's also merge zone.csv with bus.csv and identify the bus types.
bus = DataFrame(CSV.File(joinpath(datadir, "bus.csv")))
!isnothing(interconnect) && filter!(row -> row[:interconnect] == interconnect, bus)
zone = DataFrame(CSV.File(joinpath(datadir, "zone.csv")))
bus = leftjoin(bus, zone, on = :zone_id)
bustypes = Dict(1 => "PV", 2 => "PQ", 3 => "REF", 4 => "ISOLATED")
bus.bustype = [bustypes[b] for b in bus.type]
filter!(row -> row[:bustype] != "ISOLATED", bus) # the bustype column holds Strings, so compare against the string name
bus.name = "bus" .* string.(bus.bus_id)
CSV.write(joinpath(siip_data, "bus.csv"), bus)
We need branch names as strings.
branch = DataFrame(CSV.File(joinpath(datadir, "branch.csv")))
branch = leftjoin(
    branch,
    DataFrames.rename!(bus[:, [:bus_id, :baseKV]], [:from_bus_id, :from_baseKV]),
    on = :from_bus_id,
)
branch = leftjoin(
    branch,
    DataFrames.rename!(bus[:, [:bus_id, :baseKV]], [:to_bus_id, :to_baseKV]),
    on = :to_bus_id,
)
!isnothing(interconnect) && filter!(row -> row[:interconnect] == interconnect, branch)
branch.name = "branch" .* string.(branch.branch_id)
branch.tr_ratio = branch.from_baseKV ./ branch.to_baseKV
CSV.write(joinpath(siip_data, "branch.csv"), branch)
The PowerSystems parser expects the files to be named a certain way, and we need a control_mode column in the DC-line data.
dcbranch = DataFrame(CSV.File(joinpath(datadir, "dcline.csv")))
!isnothing(interconnect) && filter!(row -> row[:from_bus_id] in bus.bus_id, dcbranch)
!isnothing(interconnect) && filter!(row -> row[:to_bus_id] in bus.bus_id, dcbranch)
dcbranch.name = "dcbranch" .* string.(dcbranch.dcline_id)
dcbranch[:, :control_mode] .= "Power"
CSV.write(joinpath(siip_data, "dc_branch.csv"), dcbranch)
Now we can process the time series data, map the device names, and build the timeseries_pointers.json file that the parser expects.

timeseries = []
ts_csv = ["wind", "solar", "hydro", "demand"]
plant_ids = Symbol.(string.(gen.plant_id))
for f in ts_csv
    println("formatting $f.csv ...")
    csvpath = joinpath(siip_data, f * ".csv")
    csv = DataFrame(CSV.File(joinpath(datadir, f * ".csv")))
    (category, name_prefix, label) =
        f == "demand" ? ("Area", "", "max_active_power") :
        ("Generator", "gen", "max_active_power")
    if !("DateTime" in names(csv)) # names() returns Strings in DataFrames 1.x
        DataFrames.rename!(
            csv,
            (names(csv)[occursin.("UTC", String.(names(csv)))][1] => :DateTime),
        )
        # The time series data is in UTC; this converts it to a fixed UTC offset
        csv.DateTime =
            ZonedDateTime.(
                DateTime.(csv.DateTime, "yyyy-mm-dd HH:MM:SS"),
                timezone,
                from_utc = true,
            )
        delete!(csv, csv.DateTime .< initial_time)
        csv.DateTime = Dates.format.(csv.DateTime, "yyyy-mm-ddTHH:MM:SS")
    end
    device_names = f == "demand" ? unique(bus.zone_name) : gen.name
    for id in names(csv)
        colname = id
        if f == "demand"
            if Symbol(id) in Symbol.(zone.zone_id)
                colname = Symbol(zone[Symbol.(zone.zone_id) .== Symbol(id), :zone_name][1])
                DataFrames.rename!(csv, (id => colname))
            end
            sf = sum(bus[string.(bus.zone_id) .== id, :Pd])
        else
            if Symbol(id) in plant_ids
                colname = Symbol(gen[Symbol.(gen.plant_id) .== Symbol(id), :name][1])
                DataFrames.rename!(csv, (id => colname))
            end
            sf = maximum(csv[:, colname]) == 0.0 ? 1.0 : "Max"
        end
        if String(colname) in device_names
            push!(
                timeseries,
                Dict(
                    "simulation" => "DA",
                    "category" => category,
                    "module" => "InfrastructureSystems",
                    "type" => "SingleTimeSeries",
                    "component_name" => String(colname),
                    "name" => label,
                    "resolution" => 3600,
                    "scaling_factor_multiplier" => "get_max_active_power",
                    "scaling_factor_multiplier_module" => "PowerSystems",
                    "normalization_factor" => sf,
                    "data_file" => csvpath,
                ),
            )
        end
    end
    CSV.write(csvpath, csv)
end
timeseries_pointers = joinpath(siip_data, "timeseries_pointers.json")
open(timeseries_pointers, "w") do io
    PowerSystems.InfrastructureSystems.JSON3.write(io, timeseries)
end
The tabular data format relies on a folder containing *.csv files and .yaml files
describing the column names of each file in PowerSystems terms, and the PowerSystems
data type that should be created for each generator type. The respective us_descriptors.yaml
and us_generator_mapping.yaml files have already been tailored to this dataset.
println("parsing csv files...")
rawsys = PowerSystems.PowerSystemTableData(
    siip_data,
    100.0,
    joinpath(config_dir, "us_descriptors.yaml"),
    generator_mapping_file = joinpath(config_dir, "us_generator_mapping.yaml"),
)
Create the System

Next, we'll create a System from the rawsys data. Since a System is predicated on a
time series resolution, and the rawsys data includes both 5-minute and 1-hour resolution
time series, we also need to specify which time series we want to include in the System.
The time_series_resolution kwarg filters to only include time series with a matching resolution.
println("creating System")
sys = System(rawsys; config_path = joinpath(config_dir, "us_system_validation.json"));
show(stdout, "text/plain", sys)
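Note that the call above does not actually pass the time_series_resolution kwarg mentioned earlier. A hedged sketch of including it, assuming the hourly series are the ones we want, might look like:

```julia
# Sketch: restrict the parsed time series to hourly resolution.
# Assumes `rawsys` and `config_dir` are defined as above.
using Dates
sys = System(
    rawsys;
    config_path = joinpath(config_dir, "us_system_validation.json"),
    time_series_resolution = Dates.Hour(1),
)
```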
This all took reasonably long, so we can save our System using the serialization capability included with PowerSystems.jl:
to_json(sys, joinpath(siip_data, "sys.json"), force = true)
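The serialized system can later be reloaded directly from the JSON file, which is much faster than re-parsing all of the CSV data. A sketch:

```julia
# Reload the serialized System (assumes sys.json was written above).
sys2 = System(joinpath(siip_data, "sys.json"))
```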