pidge helps with the creation of mappings for tabular string data. The primary use cases for this are data cleaning and data categorization.
pidge consists of two parts:
- An interactive UI to help with the creation of mappings and assessing their completeness
- Library functionality to apply mappings inside data pipelines, after they have been exported from the UI
- Install pidge:

  ```shell
  pip install pidge
  ```

- Launch the UI in a notebook:

  ```python
  from pidge import pidge_ui
  import panel as pn

  pn.extension('tabulator', 'jsoneditor')
  pidge_ui(my_input_dataframe)
  ```

- Create and export a mapping named `pidge_mapping.json`.
- In your data pipeline, import `pidge` and apply the mapping:

  ```python
  from pidge import pidge

  transformed_data = pidge(my_input_dataframe, rule_file='pidge_mapping.json')
  ```
Pidge can also run the UI as a standalone web server outside of Jupyter, using the command:

```shell
python -m pidge
```
This starts the UI in a local web server, which is primarily intended for illustration purposes. It therefore starts with example data already loaded; new data can be uploaded and the predefined rules can easily be reset. The main limitation at the moment is the upload format: only `.csv` files are supported, and they are read with default `pandas.read_csv` settings.
Pidge mappings map a source string column to a newly created target string column. The logic can be broken down as follows.
- One defines a possible value, a category, for the target column.
- One associates one or more patterns with that category.
- When a value of the source column matches one of the category's patterns, that category is chosen.
- Pattern matching checks whether the pattern occurs anywhere in the source string. It is case-insensitive and supports regular expressions.
- This is repeated for as many categories as desired.
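The matching behaviour described above can be sketched in plain Python. This is a minimal illustration of the rules, not pidge's actual implementation; the mapping dictionary and column names are made up for the example:

```python
import re

import pandas as pd

# Hypothetical mapping: target category -> list of patterns.
mapping = {
    "fruit": ["apple", "banana"],
    "vegetable": ["carrot", r"pota.*"],
}

def categorize(value):
    """Return the first category with a pattern matching the value.

    Matching is a case-insensitive regex substring search, and the first
    applicable pattern wins.
    """
    for category, patterns in mapping.items():
        for pattern in patterns:
            if re.search(pattern, value, flags=re.IGNORECASE):
                return category
    return None

df = pd.DataFrame({"item": ["Apple pie", "potato salad", "bread"]})
df["item_category"] = df["item"].map(categorize)
# "Apple pie" -> "fruit", "potato salad" -> "vegetable", "bread" -> None
```

Unmatched values end up without a category here, which makes it easy to spot gaps in the mapping.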
Pidge is in an early MVP stage. At this stage the following is particularly appreciated:

- Any feedback regarding bugs, usability issues, feature requests, etc. Ideally this is filed directly as GitHub issues.
- Sharing the project, to potentially help with the previous point.
There are a few known limitations, largely due to the MVP stage of the project. These will be prioritized according to feedback and general usage of the project.

- Mapping is not optimized for speed and may slow down for large dataframes.
- Patterns are not checked for multiple inconsistent matches; the first applicable pattern is simply chosen.
- The web UI only supports small file uploads (roughly < 10 MB).
- File uploads are read with default `pandas.read_csv` settings only.
- The rule name used for the `.json` export currently cannot be changed in the UI.
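For the two upload-related points, a notebook workaround is to bypass the upload entirely: read the file yourself with whatever `pandas` settings it requires and pass the resulting dataframe to `pidge_ui` directly. A small sketch (the semicolon-separated sample data is made up):

```python
import io

import pandas as pd

# Hypothetical semicolon-separated data that default read_csv settings
# would parse into a single column.
raw = "name;amount\nWidget;3\nGadget;5\n"

# Read with explicit settings instead of relying on the UI upload.
df = pd.read_csv(io.StringIO(raw), sep=";")

# Then hand the prepared dataframe to the notebook UI:
# from pidge import pidge_ui
# pidge_ui(df)
```

This also sidesteps the upload size limit, since no file passes through the browser.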