power_grid_model_io
converters
Abstract converter class
- class power_grid_model_io.converters.base_converter.BaseConverter(source: BaseDataStore[T] | None = None, destination: BaseDataStore[T] | None = None, log_level: int = 20)
Abstract converter class
Methods
convert
(data[, extra_info])Convert input/update/(a)sym_output data and optionally extra info.
get_log_level
()Get the log level
load_asym_output_data
([data])Load asymmetric output data
load_input_data
([data, make_extra_info])Load input data and extra info
load_sc_output_data
([data])Load sc output data
load_sym_output_data
([data])Load symmetric output data
load_update_data
([data])Load update data
save
(data[, extra_info, destination])Save input/update/(a)sym_output data and optionally extra info.
set_log_level
(log_level)Set the log level
- __init__(source: BaseDataStore[T] | None = None, destination: BaseDataStore[T] | None = None, log_level: int = 20)
Initialize a logger
- load_input_data(data: T | None = None, make_extra_info: bool = True) Tuple[Dict[str, ndarray], Dict[int, Any]]
Load input data and extra info
Note: You shouldn’t have to override this method. Check _parse_data() instead.
- Args:
data: Optionally supply data in source format. If no data is supplied, it is loaded from self._source.
make_extra_info: For efficiency reasons, one can disable the creation of extra_info.
- load_update_data(data: T | None = None) Dict[str, ndarray] | Dict[str, ndarray | Dict[str, ndarray]]
Load update data
Note: You shouldn’t have to override this method. Check _parse_data() instead.
- Args:
data: Optional[T] (default: None)
- load_sym_output_data(data: T | None = None) Dict[str, ndarray] | Dict[str, ndarray | Dict[str, ndarray]]
Load symmetric output data
Note: You shouldn’t have to override this method. Check _parse_data() instead.
- Args:
data: Optional[T] (default: None)
- load_asym_output_data(data: T | None = None) Dict[str, ndarray] | Dict[str, ndarray | Dict[str, ndarray]]
Load asymmetric output data
Note: You shouldn’t have to override this method. Check _parse_data() instead.
- Args:
data: Optional[T] (default: None)
- load_sc_output_data(data: T | None = None) Dict[str, ndarray] | Dict[str, ndarray | Dict[str, ndarray]]
Load sc output data
Note: You shouldn’t have to override this method. Check _parse_data() instead.
- Args:
data: Optional[T] (default: None)
- convert(data: Dict[str, ndarray] | Dict[str, ndarray | Dict[str, ndarray]], extra_info: Dict[int, Any] | None = None) T
Convert input/update/(a)sym_output data and optionally extra info.
Note: You shouldn’t have to override this method. Check _serialize_data() instead.
- Args:
data: Dataset
extra_info: Optional[ExtraInfo] (default: None)
- save(data: Dict[str, ndarray] | Dict[str, ndarray | Dict[str, ndarray]], extra_info: Dict[int, Any] | None = None, destination: BaseDataStore[T] | None = None) None
Save input/update/(a)sym_output data and optionally extra info.
Note: You shouldn’t have to override this method. Check _serialize_data() instead.
- Args:
data: Dataset
extra_info: Optional[ExtraInfo] (default: None)
destination: Optional[BaseDataStore[T]] (default: None)
- set_log_level(log_level: int) None
Set the log level
- Args:
log_level: int
- get_log_level() int
Get the log level
- Returns:
int:
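The pattern described above (public load_*_data() methods that delegate to a single _parse_data() hook) can be sketched as follows. This is an illustrative stand-in, not the library's actual class; DictConverter is a hypothetical subclass whose "source format" is a plain dict:

```python
from typing import Any, Dict, Optional

class SketchConverter:
    """Illustrative stand-in for the BaseConverter pattern described above."""

    def load_input_data(self, data: Optional[Any] = None) -> Dict[str, Any]:
        # The public method does the shared bookkeeping and delegates the
        # actual parsing to the hook that subclasses implement.
        return self._parse_data(data, data_type="input")

    def _parse_data(self, data: Any, data_type: str) -> Dict[str, Any]:
        raise NotImplementedError

class DictConverter(SketchConverter):
    # Hypothetical subclass: only _parse_data() is overridden, exactly as the
    # notes above recommend; load_input_data() is inherited unchanged.
    def _parse_data(self, data: Any, data_type: str) -> Dict[str, Any]:
        return dict(data or {})

result = DictConverter().load_input_data({"node": [{"id": 1}]})
```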
Power Grid Model ‘Converter’: Load and store power grid model data in the native PGM JSON format.
- class power_grid_model_io.converters.pgm_json_converter.PgmJsonConverter(source_file: Path | str | None = None, destination_file: Path | str | None = None, log_level: int = 20)
A ‘converter’ class to load and store power grid model data in the native PGM JSON format. The methods are similar to the utils in power_grid_model, with the addition of storing and loading ‘extra info’. Extra info is the set of attributes that don’t match the power grid model’s internal structure, but are important to keep close to the data. The most common example is the original object ID, if the original IDs are not numeric, or not unique over all components.
Methods
convert
(data[, extra_info])Convert input/update/(a)sym_output data and optionally extra info.
get_log_level
()Get the log level
load_asym_output_data
([data])Load asymmetric output data
load_input_data
([data, make_extra_info])Load input data and extra info
load_sc_output_data
([data])Load sc output data
load_sym_output_data
([data])Load symmetric output data
load_update_data
([data])Load update data
save
(data[, extra_info, destination])Save input/update/(a)sym_output data and optionally extra info.
set_log_level
(log_level)Set the log level
- __init__(source_file: Path | str | None = None, destination_file: Path | str | None = None, log_level: int = 20)
Initialize a logger
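The ‘extra info’ idea can be illustrated with plain dictionaries: non-PGM attributes are kept in a separate mapping keyed by object id, and on save they can be stored next to the regular attributes. This is a sketch of the idea only, not the converter's actual implementation:

```python
import json

# Regular PGM-style structured data plus extra info keyed by object id.
input_data = {"node": [{"id": 1, "u_rated": 10500.0}, {"id": 2, "u_rated": 10500.0}]}
extra_info = {1: {"name": "substation_a"}}  # e.g. a non-numeric original identifier

def to_json(data, extra_info):
    # Merge each object's extra attributes into its dict before serializing,
    # so the extra info travels with the data.
    merged = {
        component: [dict(obj, **extra_info.get(obj["id"], {})) for obj in objects]
        for component, objects in data.items()
    }
    return json.dumps(merged)

serialized = to_json(input_data, extra_info)
```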
Tabular Data Converter: Load data from multiple tables and use a mapping file to convert the data to PGM
- class power_grid_model_io.converters.tabular_converter.TabularConverter(mapping_file: Path | None = None, source: BaseDataStore[TabularData] | None = None, destination: BaseDataStore[TabularData] | None = None, log_level: int = 20)
Tabular Data Converter: Load data from multiple tables and use a mapping file to convert the data to PGM
Methods
convert
(data[, extra_info])Convert input/update/(a)sym_output data and optionally extra info.
get_id
(table, key[, name])Get the numerical ID previously associated with the supplied name / key combination Args: table: Table name (e.g. "Nodes") key: Component identifier (e.g. {"name": "node1"} or {"number": 1, "sub_number": 2}) name: Optional component name (e.g. "internal_node") Returns: The associated id.
get_ids
(keys[, table, name])Get the numerical IDs previously associated with the supplied name / key combinations Args: keys: Component identifiers (e.g. a pandas DataFrame with columns "number" and "sub_number") table: Table name (e.g. "Nodes") name: Optional component name (e.g. "internal_node") Returns: The associated ids.
get_log_level
()Get the log level
load_asym_output_data
([data])Load asymmetric output data
load_input_data
([data, make_extra_info])Load input data and extra info
load_sc_output_data
([data])Load sc output data
load_sym_output_data
([data])Load symmetric output data
load_update_data
([data])Load update data
lookup_id
(pgm_id)Retrieve the original name / key combination of a pgm object Args: pgm_id: a unique numerical ID Returns: The original name / key combination
lookup_ids
(pgm_ids)Retrieve the original name / key combinations of a list of pgm objects Args: pgm_ids: a list of unique numerical IDs Returns: A (possibly sparse) pandas DataFrame storing all the original reference data
save
(data[, extra_info, destination])Save input/update/(a)sym_output data and optionally extra info.
set_log_level
(log_level)Set the log level
set_mapping
(mapping)Interpret a mapping structure. This includes:
set_mapping_file
(mapping_file)Read, parse and interpret a mapping file. Args: mapping_file: The path to the mapping file.
- __init__(mapping_file: Path | None = None, source: BaseDataStore[TabularData] | None = None, destination: BaseDataStore[TabularData] | None = None, log_level: int = 20)
Prepare some member variables and optionally load a mapping file
- Args:
mapping_file: A yaml file containing the mapping.
- set_mapping_file(mapping_file: Path) None
Read, parse and interpret a mapping file. Args:
mapping_file: The path to the mapping file
- set_mapping(mapping: Mapping[str, Any]) None
- Interpret a mapping structure. This includes:
the table to table mapping (‘grid’)
the unit conversions (‘units’)
the value substitutions (‘substitutions’) (e.g. enums or other one-on-one value mapping)
column multipliers (‘multipliers’) (e.g. values in a column ending with _kv should be multiplied by 1000.0)
- Args:
mapping: A mapping structure (dictionary):
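A mapping passed to set_mapping() could look like the dictionary below. The four section names come from the list above; the table, column and pattern names are made-up examples, not the library's actual mapping schema:

```python
# Illustrative mapping structure; section names match the documentation above,
# all table/column names are hypothetical.
mapping = {
    "grid": {  # table-to-table mapping: source tables -> PGM components
        "Nodes": {"node": {"id": "Number", "u_rated": "Unom"}},
    },
    "units": {  # unit conversions, e.g. everything expressed in kV -> V
        "V": {"kV": 1000.0},
    },
    "substitutions": {  # one-on-one value mappings, e.g. textual flags -> enums
        "switch_state": {"off": 0, "on": 1},
    },
    "multipliers": {  # per-column multipliers, e.g. columns ending in _kv
        r".+_kv$": 1000.0,
    },
}
```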
- get_id(table: str, key: Mapping[str, int], name: str | None = None) int
Get the numerical ID previously associated with the supplied name / key combination
- Args:
table: Table name (e.g. “Nodes”)
key: Component identifier (e.g. {“name”: “node1”} or {“number”: 1, “sub_number”: 2})
name: Optional component name (e.g. “internal_node”)
Returns: The associated id
- get_ids(keys: DataFrame, table: str | None = None, name: str | None = None) List[int]
Get the numerical IDs previously associated with the supplied name / key combinations
- Args:
keys: Component identifiers (e.g. a pandas DataFrame with columns “number” and “sub_number”)
table: Table name (e.g. “Nodes”)
name: Optional component name (e.g. “internal_node”)
Returns: The associated ids
- lookup_id(pgm_id: int) Dict[str, str | Dict[str, int]]
Retrieve the original name / key combination of a pgm object
- Args:
pgm_id: a unique numerical ID
Returns: The original name / key combination
- lookup_ids(pgm_ids: Collection[int]) DataFrame
Retrieve the original name / key combinations of a list of pgm objects
- Args:
pgm_ids: a list of unique numerical IDs
Returns: A (possibly sparse) pandas dataframe storing all the original reference data
PandaPower Converter
- class power_grid_model_io.converters.pandapower_converter.PandaPowerConverter(system_frequency: float = 50.0, trafo_loading: str = 'current', log_level: int = 20)
PandaPower Converter
- Attributes:
- idx
- idx_lookup
- next_idx
- pgm_input_data
- pp_input_data
- system_frequency
Methods
convert
(data[, extra_info])Convert input/update/(a)sym_output data and optionally extra info.
get_id
(pp_table, pp_idx[, name])Get a numerical ID previously associated with the supplied table / index combination
get_individual_switch_states
(component, ...)Get the state of an individual switch.
get_log_level
()Get the log level
get_switch_states
(pp_table)Return switch states of either Lines or Transformers
get_trafo3w_switch_states
(trafo3w)Return switch states of Three Winding Transformers
get_trafo3w_winding_types
()This function extracts Three Winding Transformers’ “winding_type” attribute through the “vector_group” attribute.
get_trafo_winding_types
()This function extracts Transformers’ “winding_type” attribute through the “vector_group” attribute.
load_asym_output_data
([data])Load asymmetric output data
load_input_data
([data, make_extra_info])Load input data and extra info
load_sc_output_data
([data])Load sc output data
load_sym_output_data
([data])Load symmetric output data
load_update_data
([data])Load update data
lookup_id
(pgm_id)Retrieve the original name / key combination of a pgm object
save
(data[, extra_info, destination])Save input/update/(a)sym_output data and optionally extra info.
set_log_level
(log_level)Set the log level
- __init__(system_frequency: float = 50.0, trafo_loading: str = 'current', log_level: int = 20)
Prepare some member variables
- Args:
system_frequency: fundamental frequency of the alternating current and voltage in the Network measured in Hz
- system_frequency: float
- pp_input_data: MutableMapping[str, DataFrame]
- pgm_input_data: Dict[str, ndarray]
- idx: Dict[Tuple[str, str | None], Series]
- idx_lookup: Dict[Tuple[str, str | None], Series]
- next_idx
- static get_individual_switch_states(component: DataFrame, switches: DataFrame, bus: str) Series
Get the state of an individual switch. Can be open or closed.
- Args:
component: PandaPower dataframe with information about the component that is connected to the switch. Can be a Line dataframe, Transformer dataframe or Three Winding Transformer dataframe.
switches: PandaPower dataframe with information about the switches, with attributes such as “element”, “bus” and “closed”
bus: name of the bus attribute that the component connects to (e.g. “hv_bus”, “from_bus”, “lv_bus”, etc.)
- Returns:
the “closed” value of a Switch
- get_switch_states(pp_table: str) DataFrame
Return switch states of either Lines or Transformers
- Args:
pp_table: Table name (e.g. “bus”)
- Returns:
the switch states of either Lines or Transformers
- get_trafo3w_switch_states(trafo3w: DataFrame) DataFrame
Return switch states of Three Winding Transformers
- Args:
trafo3w: PandaPower dataframe with information about the Three Winding Transformers.
- Returns:
the switch states of Three Winding Transformers
- get_trafo_winding_types() DataFrame
This function extracts Transformers’ “winding_type” attribute through the “vector_group” attribute.
- Returns:
the “from” and “to” winding types of a transformer
- get_trafo3w_winding_types() DataFrame
This function extracts Three Winding Transformers’ “winding_type” attribute through “vector_group” attribute.
- Returns:
the three winding types of Three Winding Transformers
- get_id(pp_table: str, pp_idx: int, name: str | None = None) int
Get a numerical ID previously associated with the supplied table / index combination
- Args:
pp_table: Table name (e.g. “bus”)
pp_idx: PandaPower component identifier
name: Optional component name (e.g. “internal_node”)
- Returns:
The associated id
- lookup_id(pgm_id: int) Dict[str, str | int]
Retrieve the original name / key combination of a pgm object
- Args:
pgm_id: a unique numerical ID
- Returns:
The original table / index combination
Vision Excel Converter: Load data from a Vision Excel export file and use a mapping file to convert the data to PGM
- class power_grid_model_io.converters.vision_excel_converter.IdReferenceFields(nodes_table: str, number: str, node_number: str, sub_number: str)
Data class to store language-specific reference fields.
- nodes_table: str
- number: str
- node_number: str
- sub_number: str
- __init__(nodes_table: str, number: str, node_number: str, sub_number: str) None
- class power_grid_model_io.converters.vision_excel_converter.VisionExcelConverter(source_file: Path | str | None = None, language: str = 'en', terms_changed: dict | None = None, mapping_file: Path | str | None = None, log_level: int = 20)
Vision Excel Converter: Load data from a Vision Excel export file and use a mapping file to convert the data to PGM
Methods
convert
(data[, extra_info])Convert input/update/(a)sym_output data and optionally extra info.
get_appliance_id
(table, node_number, sub_number)Get the automatically assigned id of an appliance (source, load, etc.)
get_branch_id
(table, number)Get the automatically assigned id of a branch (line, transformer, etc.)
get_id
(table, key[, name])Get the numerical ID previously associated with the supplied name / key combination Args: table: Table name (e.g. "Nodes") key: Component identifier (e.g. {"name": "node1"} or {"number": 1, "sub_number": 2}) name: Optional component name (e.g. "internal_node") Returns: The associated id.
get_ids
(keys[, table, name])Get the numerical IDs previously associated with the supplied name / key combinations Args: keys: Component identifiers (e.g. a pandas DataFrame with columns "number" and "sub_number") table: Table name (e.g. "Nodes") name: Optional component name (e.g. "internal_node") Returns: The associated ids.
get_log_level
()Get the log level
get_node_id
(number)Get the automatically assigned id of a node
get_virtual_id
(table, obj_name, node_number, ...)Get the automatically assigned id of a virtual object (e.g. the internal node of a 'TransformerLoad').
load_asym_output_data
([data])Load asymmetric output data
load_input_data
([data, make_extra_info])Load input data and extra info
load_sc_output_data
([data])Load sc output data
load_sym_output_data
([data])Load symmetric output data
load_update_data
([data])Load update data
lookup_id
(pgm_id)Retrieve the original name / key combination of a pgm object Args: pgm_id: a unique numerical ID Returns: The original name / key combination
lookup_ids
(pgm_ids)Retrieve the original name / key combinations of a list of pgm objects Args: pgm_ids: a list of unique numerical IDs Returns: A (possibly sparse) pandas DataFrame storing all the original reference data
save
(data[, extra_info, destination])Save input/update/(a)sym_output data and optionally extra info.
set_log_level
(log_level)Set the log level
set_mapping
(mapping)Interpret a mapping structure. This includes:
set_mapping_file
(mapping_file)Read, parse and interpret a mapping file. Args: mapping_file: The path to the mapping file.
- __init__(source_file: Path | str | None = None, language: str = 'en', terms_changed: dict | None = None, mapping_file: Path | str | None = None, log_level: int = 20)
Prepare some member variables and optionally load a mapping file
- Args:
mapping_file: A yaml file containing the mapping.
- set_mapping(mapping: Mapping[str, Any]) None
- Interpret a mapping structure. This includes:
the table to table mapping (‘grid’)
the unit conversions (‘units’)
the value substitutions (‘substitutions’) (e.g. enums or other one-on-one value mapping)
column multipliers (‘multipliers’) (e.g. values in a column ending with _kv should be multiplied by 1000.0)
- Args:
mapping: A mapping structure (dictionary):
- get_node_id(number: int) int
Get the automatically assigned id of a node
- get_branch_id(table: str, number: int) int
Get the automatically assigned id of a branch (line, transformer, etc.)
- get_appliance_id(table: str, node_number: int, sub_number: int) int
Get the automatically assigned id of an appliance (source, load, etc.)
- get_virtual_id(table: str, obj_name: str, node_number: int, sub_number: int) int
Get the automatically assigned id of a virtual object (e.g. the internal node of a ‘TransformerLoad’)
data_stores
Abstract data store class
- class power_grid_model_io.data_stores.base_data_store.BaseDataStore
Abstract data store class
Methods
load
()The method that loads the data from one or more sources and returns it in the specified format.
save
(data)The method that saves the data to one or more destinations.
- __init__()
Initialize a logger
- abstract load() T
The method that loads the data from one or more sources and returns it in the specified format. Note that the load() method does not receive a reference to the data source(s); i.e. the data source(s) should be set in the constructor, or in a separate member method.
Returns: Loaded data of type <T>
- abstract save(data: T) None
The method that saves the data to one or more destinations. Note that the save() method does not receive a reference to the data destination(s); i.e. the data destination(s) should be set in the constructor, or in a separate member method.
- Args:
data: The data to store, should be of type <T>
Excel File Store
- class power_grid_model_io.data_stores.excel_file_store.ExcelFileStore(file_path: Path | None = None, language: str = 'en', terms_changed: dict | None = None, **extra_paths: Path)
Excel File Store
The first row of each sheet is expected to contain the column names, unless specified differently by an extension of this class. Columns with duplicate names (on the same sheet) are either removed (if they contain exactly the same values) or renamed.
Methods
files
()The files as supplied in the constructor.
load
()Load one or more Excel files as tabular data.
save
(data)Store tabular data as one or more Excel files.
- __init__(file_path: Path | None = None, language: str = 'en', terms_changed: dict | None = None, **extra_paths: Path)
Initialize a logger
- files() Dict[str, Path]
The files as supplied in the constructor. Note that the file names are read-only.
Returns: A copy of the file paths as set in the constructor.
- load() TabularData
Load one or more Excel files as tabular data.
Returns: The contents of all the Excel files supplied in the constructor. The tables of the main file will have no prefix, while the tables of all the extra files will be prefixed with the name of the keyword argument as supplied in the constructor.
- save(data: TabularData) None
Store tabular data as one or more Excel files.
- Args:
data: The data to store. The keys of the tables will be the names of the spreadsheets in the Excel files. Table names with a prefix corresponding to the name of the keyword argument as supplied in the constructor will be stored in the associated files.
The json file store
- class power_grid_model_io.data_stores.json_file_store.JsonFileStore(file_path: Path)
The json file store expects each json file to be either a single dictionary, or a list of dictionaries.
Methods
load
()Loads a JSON file, validates the structure and returns it in a native python data format
save
(data)Saves the native python data format as a JSON file
set_compact
(compact)In compact mode, each object will be output on a single line.
set_indent
(indent)Change the number of spaces used for each indent level (affects output only)
- __init__(file_path: Path)
Initialize a logger
- set_indent(indent: int | None) None
Change the number of spaces used for each indent level (affects output only)
- Args:
indent: Number of spaces for each indent level. None = all json on a single line.
- set_compact(compact: bool) None
In compact mode, each object will be output on a single line. Note that the JsonFileStore is not very general; it assumes that data is either a dictionary of this format:
{
    "category_0": [
        {"attribute_0": …, "attribute_1": …, …},
        {"attribute_0": …, "attribute_1": …, …},
    ],
    "category_1": [
        {"attribute_0": …, "attribute_1": …, …},
        {"attribute_0": …, "attribute_1": …, …},
    ],
    …
}
or a list of those dictionaries.
- Args:
compact: Boolean defining if the output should be stored compact or not
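The compact mode described above can be sketched with a small formatter. This is an assumption-based illustration of the described output shape (category names and brackets on their own lines, every object on a single line), not the store's actual code:

```python
import json

def dumps_compact(data: dict) -> str:
    # Emit one JSON object per line, with the category structure spread
    # over separate lines, matching the "compact mode" description above.
    lines = ["{"]
    for i, (category, objects) in enumerate(data.items()):
        lines.append(f'  "{category}": [')
        for j, obj in enumerate(objects):
            lines.append("    " + json.dumps(obj) + ("," if j < len(objects) - 1 else ""))
        lines.append("  ]" + ("," if i < len(data) - 1 else ""))
    lines.append("}")
    return "\n".join(lines)

text = dumps_compact({"node": [{"id": 0, "u_rated": 110000.0}, {"id": 1, "u_rated": 110000.0}]})
```

The result is still valid JSON; only the whitespace layout differs from json.dumps(…, indent=2).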
- load() Dict[str, List[Dict[str, Any]]] | List[Dict[str, List[Dict[str, Any]]]]
Loads a JSON file, validates the structure and returns it in a native python data format
Returns: StructuredData
- save(data: Dict[str, List[Dict[str, Any]]] | List[Dict[str, List[Dict[str, Any]]]]) None
Saves the native python data format as a JSON file
- Args:
data: StructuredData
Vision Excel file store
- class power_grid_model_io.data_stores.vision_excel_file_store.VisionExcelFileStore(file_path: Path, language: str = 'en', terms_changed: dict | None = None)
Vision Excel file store
In Vision files, the second row contains information about the unit of measure. Therefore, row 1 (which is row 2 in Excel) is added to the header_rows in the constructor.
Methods
files
()The files as supplied in the constructor.
load
()Load one or more Excel file as tabular data.
save
(data)Store tabular data as one or more Excel file.
- __init__(file_path: Path, language: str = 'en', terms_changed: dict | None = None)
- Args:
file_path: The main Vision Excel export file
data_types
Common data types used in the Power Grid Model project
- power_grid_model_io.data_types._data_types.ExtraInfo
ExtraInfo is information about power grid model objects that is not part of the calculations. E.g. the original ID or name of a node, or the material of a cable (line), etc.
It is a dictionary with numerical keys corresponding to the ids in input_data etc. The values are dictionaries with textual keys. Their values may be anything, but it is advised to use only JSON serializable types like numerical values, strings, lists, dictionaries etc.
{
    1: {
        "length_km": 123.4,
        "material": "Aluminuminuminum",
    },
    2: {
        "id_reference": {
            "table": "load",
            "name": "const_power",
            "index": 101
        }
    }
}
alias of Dict[int, Any]
- power_grid_model_io.data_types._data_types.ExtraInfoLookup
Legacy type name; use ExtraInfo instead!
alias of Dict[int, Any]
- power_grid_model_io.data_types._data_types.StructuredData
Structured data is a multi-dimensional structure (component_type -> objects -> attribute -> value) or a list of those dictionaries:
{
    "node": [
        {"id": 0, "u_rated": 110000.0},
        {"id": 1, "u_rated": 110000.0},
    ],
    "line": [
        {"id": 2, "from_node": 0, "to_node": 1, "from_status": 1, "to_status": 1}
    ],
    "source": [
        {"id": 3, "node": 0, "status": 1, "u_ref": 1.0}
    ]
}
alias of Union[Dict[str, List[Dict[str, Any]]], List[Dict[str, List[Dict[str, Any]]]]]
The TabularData class is a wrapper around Dict[str, Union[pd.DataFrame, np.ndarray]], which supports unit conversions and value substitutions
- class power_grid_model_io.data_types.tabular_data.TabularData(logger=None, **tables: DataFrame | ndarray | Callable[[], DataFrame])
The TabularData class is a wrapper around Dict[str, Union[pd.DataFrame, np.ndarray]], which supports unit conversions and value substitutions
Methods
get_column
(table_name, column_name)Select a column from a table, while applying unit conversions and value substitutions
items
()Mimic the dictionary .items() function
keys
()Mimic the dictionary .keys() function
set_substitutions
(substitution)Define value substitutions
set_unit_multipliers
(units)Define unit multipliers.
- __init__(logger=None, **tables: DataFrame | ndarray | Callable[[], DataFrame])
Tabular data can either be a collection of pandas DataFrames and/or numpy structured arrays. The keyword arguments will define the keys of the data.
tabular_data = TabularData(foo=foo_data)
tabular_data["foo"]  # -> foo_data
- Args:
**tables: A collection of pandas DataFrames and/or numpy structured arrays
- set_unit_multipliers(units: UnitMapping) None
Define unit multipliers.
- Args:
units: A UnitMapping object defining all the units and their conversions (e.g. 1 MW = 1_000_000 W)
- set_substitutions(substitution: ValueMapping) None
Define value substitutions
- Args:
substitution: A ValueMapping defining all value substitutions (e.g. “yes” -> 1)
- get_column(table_name: str, column_name: str) Series
Select a column from a table, while applying unit conversions and value substitutions
- Args:
table_name: The name of the table as supplied in the constructor
column_name: The name of the column, or “index” to get the index
- Returns:
The required column, with unit conversions and value substitutions applied
- keys() Iterable[str]
Mimic the dictionary .keys() function
Returns: An iterator over all table names as supplied in the constructor.
- items() Generator[Tuple[str, DataFrame | ndarray], None, None]
Mimic the dictionary .items() function
Returns: A generator of the table names and the raw table data
functions
These functions can be used in the mapping files to apply functions to tabular data
- power_grid_model_io.functions._functions.has_value(value: Any) bool
Return True if the value is not None, NaN or empty string.
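Following that description, has_value() can be sketched as below (an illustration written from the docstring, not the library source):

```python
import math

def has_value(value) -> bool:
    # False for None, NaN and the empty string; True for everything else.
    if value is None or value == "":
        return False
    if isinstance(value, float) and math.isnan(value):
        return False
    return True
```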
- power_grid_model_io.functions._functions.value_or_default(value: T | None, default: T) T
Return the value, or a default value if no value was supplied.
- power_grid_model_io.functions._functions.value_or_zero(value: float | None) float
Return the value, or a zero value if no value was supplied.
- power_grid_model_io.functions._functions.complex_inverse_real_part(real: float, imag: float) float
Return the real part of the inverse of a complex number
- power_grid_model_io.functions._functions.complex_inverse_imaginary_part(real: float, imag: float) float
Return the imaginary part of the inverse of a complex number
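These two helpers follow from the algebraic identity 1/(a + bi) = (a - bi) / (a^2 + b^2); a sketch:

```python
def complex_inverse_real_part(real: float, imag: float) -> float:
    # Re(1 / (real + j*imag)) = real / (real^2 + imag^2)
    return real / (real**2 + imag**2)

def complex_inverse_imaginary_part(real: float, imag: float) -> float:
    # Im(1 / (real + j*imag)) = -imag / (real^2 + imag^2)
    return -imag / (real**2 + imag**2)
```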
- power_grid_model_io.functions._functions.get_winding(winding: str, neutral_grounding: bool = True) WindingType
Return the winding type as an enum value, based on the string representation
- power_grid_model_io.functions._functions.degrees_to_clock(degrees: float) int
Return the clock number corresponding to the given angle in degrees
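Assuming standard transformer clock notation (one clock step per 30 degrees), degrees_to_clock() can be sketched as:

```python
def degrees_to_clock(degrees: float) -> int:
    # One clock position per 30 degrees, wrapped to the 0..11 range
    # (so 360 degrees maps back to clock 0).
    return round(degrees / 30.0) % 12
```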
- power_grid_model_io.functions._functions.is_greater_than(left_side, right_side) bool
Return true if the first argument is greater than the second
- power_grid_model_io.functions._functions.both_zeros_to_nan(value: float, other_value: float) float
If both values are zero then return NaN, otherwise return the same value. Truth table (columns: x = value, rows: y = other_value):

y \ x    0      value  nan
0        nan    value  nan
value    0      value  nan
nan      nan    value  nan
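A sketch reconstructed from the truth table above; note that the table implies a NaN other_value is treated like zero (this is an inference from the table, not the library source):

```python
import math

def both_zeros_to_nan(value: float, other_value: float) -> float:
    # Return NaN when value is zero and other_value is zero (or NaN);
    # otherwise pass value through unchanged.
    if value == 0 and (other_value == 0 or math.isnan(other_value)):
        return float("nan")
    return value
```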
These functions can be used in the mapping files to apply functions to vision data
- power_grid_model_io.functions.phase_to_phase.relative_no_load_current(i_0: float, p_0: float, s_nom: float, u_nom: float) float
Calculate the relative no load current.
- power_grid_model_io.functions.phase_to_phase.reactive_power(p: float, cos_phi: float) float
Calculate the reactive power, based on p, cosine phi.
- power_grid_model_io.functions.phase_to_phase.power_wind_speed(p_nom: float, wind_speed: float, cut_in_wind_speed: float = 3.0, nominal_wind_speed: float = 14.0, cutting_out_wind_speed: float = 25.0, cut_out_wind_speed: float = 30.0, axis_height: float = 30.0) float
Estimate p_ref based on p_nom and wind_speed.
See section “Wind turbine” in https://phasetophase.nl/pdf/VisionEN.pdf
- power_grid_model_io.functions.phase_to_phase.get_winding_from(conn_str: str, neutral_grounding: bool = True) WindingType
Get the winding type, based on a textual encoding of the conn_str
- power_grid_model_io.functions.phase_to_phase.get_winding_to(conn_str: str, neutral_grounding: bool = True) WindingType
Get the winding type, based on a textual encoding of the conn_str
- power_grid_model_io.functions.phase_to_phase.get_winding_1(conn_str: str, neutral_grounding: bool = True) WindingType
Get the winding type, based on a textual encoding of the conn_str
- power_grid_model_io.functions.phase_to_phase.get_winding_2(conn_str: str, neutral_grounding: bool = True) WindingType
Get the winding type, based on a textual encoding of the conn_str
- power_grid_model_io.functions.phase_to_phase.get_winding_3(conn_str: str, neutral_grounding: bool = True) WindingType
Get the winding type, based on a textual encoding of the conn_str
- power_grid_model_io.functions.phase_to_phase.get_clock(conn_str: str) int
Extract the clock part of the conn_str
- power_grid_model_io.functions.phase_to_phase.get_clock_12(conn_str: str) int
Extract the clock part of the conn_str
- power_grid_model_io.functions.phase_to_phase.get_clock_13(conn_str: str) int
Extract the clock part of the conn_str
- power_grid_model_io.functions.phase_to_phase.reactive_power_to_susceptance(q: float, u_nom: float) float
Calculate susceptance, b1 from reactive power Q with nominal voltage
- power_grid_model_io.functions.phase_to_phase.pvs_power_adjustment(p: float, efficiency_type: str) float
Adjust the power of a PV system for the default efficiency type of 97% or 95%. Defaults to 100% for other custom types
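Two of the formulas above follow directly from standard power-engineering relations; the sketches below are written from those relations, not from the library source, so edge-case handling may differ:

```python
import math

def reactive_power(p: float, cos_phi: float) -> float:
    # Q = P * tan(phi), with phi = arccos(cos_phi)
    return p * math.tan(math.acos(cos_phi))

def reactive_power_to_susceptance(q: float, u_nom: float) -> float:
    # B = Q / U_nom^2: the shunt susceptance that draws Q at nominal voltage
    return q / u_nom**2
</```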
mappings
RegEx attribute based mapping class
- class power_grid_model_io.mappings.field_mapping.FieldMapping(mapping: Dict[str, T] | None = None, logger=None)
RegEx attribute based mapping class
- __init__(mapping: Dict[str, T] | None = None, logger=None)
Field multiplier helper class
- class power_grid_model_io.mappings.multiplier_mapping.MultiplierMapping(mapping: Dict[str, float] | None = None, logger=None)
Field multiplier helper class
Methods
get_multiplier
(attr[, table])Find the multiplier for a given attribute.
- __init__(mapping: Dict[str, float] | None = None, logger=None)
- get_multiplier(attr: str, table: str | None = None) float
Find the multiplier for a given attribute.
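Assuming the mapping keys are regular expressions matched against attribute names (as the “RegEx attribute based mapping class” above suggests), get_multiplier() can be sketched as follows; the patterns here are made-up examples:

```python
import re
from typing import Optional

# Hypothetical multiplier mapping: regex pattern -> multiplier.
mapping = {r".+_kv$": 1000.0, r".+_mw$": 1000000.0}

def get_multiplier(attr: str, table: Optional[str] = None) -> float:
    # Try the table-qualified name first, then the bare attribute name,
    # and return the multiplier of the first pattern that fully matches.
    candidates = ([f"{table}.{attr}"] if table else []) + [attr]
    for candidate in candidates:
        for pattern, multiplier in mapping.items():
            if re.fullmatch(pattern, candidate):
                return multiplier
    raise KeyError(attr)
```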
Tabular data mapping helper class
- class power_grid_model_io.mappings.tabular_mapping.TabularMapping(mapping: Dict[str, Dict[str, Dict[str, int | float | str | Dict | List] | List[Dict[str, int | float | str | Dict | List]]]], logger=None)
Tabular data mapping helper class
Methods
instances
(table)Return instance definitions (as a generator)
tables
()Return the names of the tables (as a generator)
- __init__(mapping: Dict[str, Dict[str, Dict[str, int | float | str | Dict | List] | List[Dict[str, int | float | str | Dict | List]]]], logger=None)
- tables() Generator[str, None, None]
Return the names of the tables (as a generator)
- Yields:
table_name
- instances(table: str) Generator[Tuple[str, Dict[str, int | float | str | Dict | List]], None, None]
Return instance definitions (as a generator)
- Yields:
component_name, instance_attribute_mapping
Unit mapping helper class
- class power_grid_model_io.mappings.unit_mapping.UnitMapping(mapping: Dict[str, Dict[str, float] | None] | None = None, logger=None)
Unit mapping helper class. The input data is expected to be of the form:
{
    "A": None,
    "W": {
        "kW": 1000.0,
        "MW": 1000000.0
    }
}
Methods
get_unit_multiplier
(unit)Find the correct unit multiplier and the corresponding SI unit
set_mapping
(mapping)Creates an internal mapping lookup table based on input data of the form: mapping = { "A" : None, "W": { "kW": 1000.0, "MW": 1000000.0 } }
- __init__(mapping: Dict[str, Dict[str, float] | None] | None = None, logger=None)
- set_mapping(mapping: Dict[str, Dict[str, float] | None])
Creates an internal mapping lookup table based on input data of the form: mapping = {
“A” : None, “W”: {
“kW”: 1000.0, “MW”: 1000000.0
}
}
- get_unit_multiplier(unit: str) Tuple[float, str]
Find the correct unit multiplier and the corresponding SI unit
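The documented mapping form above (SI units mapping to None, non-SI units mapping to a multiplier under their SI unit) suggests a lookup like the following sketch (illustrative, not the library's implementation):

```python
from typing import Dict, Optional, Tuple

def get_unit_multiplier(
    mapping: Dict[str, Optional[Dict[str, float]]], unit: str
) -> Tuple[float, str]:
    # An SI unit maps to itself with multiplier 1.0.
    if unit in mapping:
        return 1.0, unit
    # Otherwise, search each SI unit's alternatives for the given unit.
    for si_unit, alternatives in mapping.items():
        if alternatives and unit in alternatives:
            return alternatives[unit], si_unit
    raise KeyError(f"Unknown unit: {unit}")

units = {"A": None, "W": {"kW": 1000.0, "MW": 1000000.0}}
print(get_unit_multiplier(units, "kW"))  # (1000.0, 'W')
```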
Value substitution helper class
- class power_grid_model_io.mappings.value_mapping.ValueMapping(mapping: Dict[str, Dict[int | float | str | bool, int | float | str | bool]] | None = None, logger=None)
Value substitution helper class
Methods
get_substitutions
(attr[, table])Find the substitutions for a given attribute.
- __init__(mapping: Dict[str, Dict[int | float | str | bool, int | float | str | bool]] | None = None, logger=None)
- get_substitutions(attr: str, table: str | None = None) Dict[int | float | str | bool, int | float | str | bool]
Find the substitutions for a given attribute.
utils
Automatic ID generator class
- class power_grid_model_io.utils.auto_id.AutoID
Automatic ID generator class
- Usage without items:
auto_id = AutoID()
a = auto_id()       # a = 0
b = auto_id()       # b = 1
c = auto_id()       # c = 2
item = auto_id[2]   # item = 2
- Usage with hashable items:
auto_id = AutoID()
a = auto_id(item="Alpha")  # a = 0
b = auto_id(item="Bravo")  # b = 1
c = auto_id(item="Alpha")  # c = 0 (because key "Alpha" already existed)
item = auto_id[1]          # item = "Bravo"
- Usage with non-hashable items:
auto_id = AutoID()
a = auto_id(item={"name": "Alpha"}, key="Alpha")  # a = 0
b = auto_id(item={"name": "Bravo"}, key="Bravo")  # b = 1
c = auto_id(item={"name": "Alpha"}, key="Alpha")  # c = 0 (because key "Alpha" already existed)
item = auto_id[1]                                 # item = {"name": "Bravo"}
- Note: clashing keys will update the item:
auto_id = AutoID()
a = auto_id(item={"name": "Alpha"}, key="Alpha")   # a = 0
b = auto_id(item={"name": "Bravo"}, key="Bravo")   # b = 1
c = auto_id(item={"name": "Charly"}, key="Alpha")  # c = 0 (because key "Alpha" already existed)
item = auto_id[0]                                  # item = {"name": "Charly"}
Methods
__call__
([item, key])Generate a new unique numerical id for the item, or retrieve the previously generated id.
- __init__() None
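The documented behaviour can be reproduced in a few lines of plain Python. This SimpleAutoID class is an illustrative sketch, not the library's implementation:

```python
from typing import Any, Hashable, List, Optional

class SimpleAutoID:
    """Illustrative sketch of the documented AutoID behaviour
    (not the power_grid_model_io implementation)."""

    def __init__(self) -> None:
        self._keys: List[Hashable] = []
        self._items: List[Any] = []

    def __call__(self, item: Any = None, key: Optional[Hashable] = None) -> int:
        # Without an explicit key, the item itself is the key; with no item
        # at all, the next index is used as the key.
        if key is None:
            key = item if item is not None else len(self._keys)
        if key in self._keys:
            idx = self._keys.index(key)
            # Clashing keys update the stored item.
            self._items[idx] = item if item is not None else key
            return idx
        self._keys.append(key)
        self._items.append(item if item is not None else key)
        return len(self._keys) - 1

    def __getitem__(self, idx: int) -> Any:
        return self._items[idx]
```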
General dictionary utilities
- power_grid_model_io.utils.dict.merge_dicts(*dictionaries: Dict) Dict
Merge multiple dictionaries, ignoring duplicate key/value pairs
- Args:
*dictionaries: The dictionaries to be merged
Returns: A (deep-copied) combination of all dictionaries
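A minimal sketch of such a merge (illustrative; the library's exact conflict handling may differ) deep-copies each input so the result shares no state with them:

```python
import copy
from typing import Dict

def merge_dicts(*dictionaries: Dict) -> Dict:
    # Deep-copy each dictionary so the result shares no state with the
    # inputs, then merge; duplicate identical key/value pairs are harmless,
    # later dictionaries win on clashing keys (assumed semantics).
    result: Dict = {}
    for dictionary in dictionaries:
        result.update(copy.deepcopy(dictionary))
    return result
```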
Helper functions to download (and store) files from the internet
The simplest (and intended) usage is:
url = "http://141.51.193.167/simbench/gui/usecase/download/?simbench_code=1-complete_data-mixed-all-0-sw&format=csv"
zip_file_path = download(url)
This downloads the zip file 1-complete_data-mixed-all-0-sw.zip to a folder in your system's temp dir; for example "/tmp/1-complete_data-mixed-all-0-sw.zip".
Another convenience function is download_and_extract():
csv_dir_path = download_and_extract(url)
This downloads the zip file as described above and then extracts the files into a folder corresponding to the zip file name ("/tmp/1-complete_data-mixed-all-0-sw/" in our example), returning the path to that directory. By default, it will not re-download or re-extract the zip file as long as the files exist in your temp dir. Your temp dir is typically emptied when you reboot your computer.
- class power_grid_model_io.utils.download.ResponseInfo(status: int, file_name: str | None = None, file_size: int | None = None)
Data class to store response information extracted from the response header
- Attributes:
- file_name
- file_size
- status: int
- file_name: str | None = None
- file_size: int | None = None
- __init__(status: int, file_name: str | None = None, file_size: int | None = None) None
- class power_grid_model_io.utils.download.DownloadProgressHook(progress_bar: tqdm)
Report hook for request.urlretrieve() to update a progress bar based on the number of downloaded blocks
Methods
__call__
(block_num, block_size, file_size)Update the progress bar based on the downloaded blocks
- __init__(progress_bar: tqdm)
Report hook for request.urlretrieve() to update a progress bar based on the number of downloaded blocks
- Args:
progress_bar: A tqdm progress bar
- power_grid_model_io.utils.download.download_and_extract(url: str, dir_path: Path | None = None, file_name: Path | str | None = None, overwrite: bool = False) Path
Download a file from a URL and store it locally, extract the contents and return the path to the contents.
- Args:
url: The url to the .zip file
dir_path: An optional dir path to store the downloaded file. If no dir_path is given, the current working dir will be used.
file_name: An optional file name (or path relative to dir_path). If no file_name is given, a file name is generated based on the url.
overwrite: Whether to download the file again, even if it has already been downloaded (and the file size still matches). Be careful with this option, as it will irreversibly remove files from your drive!
- Returns:
The path to the extracted contents
- power_grid_model_io.utils.download.download(url: str, file_name: Path | str | None = None, dir_path: Path | None = None, overwrite: bool = False) Path
Download a file from a URL and store it locally
- Args:
url: The url to the file
file_name: An optional file name (or path relative to dir_path). If no file_name is given, a file name is generated based on the url.
dir_path: An optional dir path to store the downloaded file. If no dir_path is given, the current working dir will be used.
overwrite: Whether to download the file again, even if it has already been downloaded (and the file size still matches)
- Returns:
The path to the downloaded file
- power_grid_model_io.utils.download.get_response_info(url: str) ResponseInfo
Retrieve the response info for a given URL (based on its header)
- Args:
url: The url to the file
- Returns:
The response info, including the file size in bytes
- power_grid_model_io.utils.download.get_download_path(dir_path: Path | None = None, file_name: Path | str | None = None, unique_key: str | None = None) Path
Determine the file path based on dir_path, file_name and/or data
- Args:
- dir_path: An optional dir path to store the downloaded file. If no dir_path is given, the system's temp dir will be used.
- file_name: An optional file name (or path relative to dir_path). If no file_name is given, a file name is generated based on the unique key (e.g. a url).
unique_key: A unique string that can be used to generate a file name (e.g. a url)
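As an illustration of how such a path could be derived (hypothetical, not the library's implementation; the hash-based file name is an assumption):

```python
import hashlib
import tempfile
from pathlib import Path
from typing import Optional, Union

def get_download_path(dir_path: Optional[Path] = None,
                      file_name: Optional[Union[Path, str]] = None,
                      unique_key: Optional[str] = None) -> Path:
    # Default to the system's temp dir when no dir_path is given.
    base = Path(dir_path) if dir_path is not None else Path(tempfile.gettempdir())
    if file_name is None:
        if unique_key is None:
            raise ValueError("Supply either file_name or unique_key")
        # Derive a stable file name from the unique key (e.g. a url);
        # the sha256 digest is an illustrative choice.
        file_name = hashlib.sha256(unique_key.encode()).hexdigest()
    return base / file_name
```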
Custom json encoding functionalities
- class power_grid_model_io.utils.json.JsonEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)
Custom JSON encoder for numpy types
Methods
default
(o)Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError).
encode
(o)Return a JSON string representation of a Python data structure.
iterencode
(o[, _one_shot])Encode the given object and yield each string representation as available.
- default(o)
Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError). For example, to support arbitrary iterators, you could implement default like this:
def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
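The iterator example above is runnable as-is when wrapped in a json.JSONEncoder subclass (the class name IterEncoder is ours, for illustration only):

```python
import json

class IterEncoder(json.JSONEncoder):
    # Support arbitrary iterators, as in the example above.
    def default(self, o):
        try:
            iterable = iter(o)
        except TypeError:
            pass
        else:
            return list(iterable)
        # Let the base class default method raise the TypeError
        return json.JSONEncoder.default(self, o)

print(json.dumps(range(3), cls=IterEncoder))  # [0, 1, 2]
```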
- power_grid_model_io.utils.json.compact_json_dump(data: Any, io_stream: IO[str], indent: int, max_level: int, level: int = 0)
Custom compact JSON writer that is intended to put data belonging to a single object on a single line.
For example:
{
    "node": [
        {"id": 0, "u_rated": 10500.0},
        {"id": 1, "u_rated": 10500.0}
    ],
    "line": [
        {"id": 2, "node_from": 0, "node_to": 1, ...}
    ]
}
The function is called recursively, starting at level 0 and recursing until max_level is reached. It is essentially a full JSON writer, but for efficiency reasons the native json.dump method is used at the deepest levels.
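A simplified version of such a writer (an illustrative sketch, not the actual implementation) recurses until max_level is reached and then falls back to the native json.dumps, which puts the remaining value on one line:

```python
import json
from typing import Any, IO

def compact_json_dump(data: Any, io_stream: IO[str], indent: int,
                      max_level: int, level: int = 0):
    # At max_level, or for scalar values, fall back to the native writer,
    # which emits the whole value on a single line.
    if level >= max_level or not isinstance(data, (dict, list)):
        io_stream.write(json.dumps(data))
        return
    pad = " " * indent * (level + 1)
    if isinstance(data, dict):
        io_stream.write("{\n")
        for i, (key, value) in enumerate(data.items()):
            io_stream.write(f"{pad}{json.dumps(key)}: ")
            compact_json_dump(value, io_stream, indent, max_level, level + 1)
            io_stream.write(",\n" if i < len(data) - 1 else "\n")
        io_stream.write(" " * indent * level + "}")
    else:
        io_stream.write("[\n")
        for i, value in enumerate(data):
            io_stream.write(pad)
            compact_json_dump(value, io_stream, indent, max_level, level + 1)
            io_stream.write(",\n" if i < len(data) - 1 else "\n")
        io_stream.write(" " * indent * level + "]")
```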
Module utilities, especially useful for loading optional dependencies
- power_grid_model_io.utils.modules.get_function(fn_name: str) Callable
Get a function pointer by name
Helper function to extract zip files
csv_dir_path = extract("/tmp/1-complete_data-mixed-all-0-sw.zip")
This extracts the files into a folder corresponding to the zip file name ("/tmp/1-complete_data-mixed-all-0-sw/" in our example) and returns the path to that directory. By default, it will not re-extract the zip file as long as the files exist.
- power_grid_model_io.utils.zip.extract(src_file_path: Path, dst_dir_path: Path | None = None, skip_if_exists=False) Path
Extract a .zip file and return the destination dir
- Args:
src_file_path: The .zip file to extract.
dst_dir_path: An optional destination path. If none is given, the src_file_path without the .zip extension is used.
skip_if_exists: Skip existing files; otherwise raise an exception when a file exists.
Returns: The path where the files are extracted
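A minimal sketch of such an extract helper using only the standard library (illustrative; the library's error handling may differ):

```python
import zipfile
from pathlib import Path
from typing import Optional

def extract(src_file_path: Path, dst_dir_path: Optional[Path] = None,
            skip_if_exists: bool = False) -> Path:
    # Default destination: the zip path without its .zip extension.
    if dst_dir_path is None:
        dst_dir_path = src_file_path.with_suffix("")
    dst_dir_path.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(src_file_path) as zip_file:
        for member in zip_file.namelist():
            target = dst_dir_path / member
            if target.exists():
                if skip_if_exists:
                    continue  # keep the existing file, extract the rest
                raise FileExistsError(f"{target} already exists")
            zip_file.extract(member, dst_dir_path)
    return dst_dir_path
```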