Large-scale choice modeling through the lens of machine learning


Choice-Learn is a Python package designed to help you formulate, estimate, and deploy discrete choice models, e.g., for assortment planning. The package provides ready-to-use datasets and models studied in the academic literature. It also offers a lower-level interface if you wish to customize the specification of the choice model or formulate your own model from scratch. Choice-Learn efficiently handles large-scale choice data by limiting RAM usage.

Choice-Learn uses NumPy and pandas as data backend engines and TensorFlow for models.


🔱 Introduction - Discrete Choice modeling

Discrete choice models aim to explain or predict choices over a set of alternatives. Well-known use cases include analyzing people's choice of mode of transport or their product purchases in stores.

If you are new to choice modeling, you can check this resource. The different notebooks from the Getting Started section can also help you understand choice modeling and, more importantly, guide you through your own use case.

🔱 What's in there?

Data

  • The ChoiceDataset class can handle choice datasets with efficient memory management. It can be used on your own dataset. [Example]
  • Many academic datasets are integrated into the library and ready to be used (a short loading example follows the table):
| Dataset | Raw Data | Origin | `from choice_learn.datasets import` | Doc |
|---|---|---|---|---|
| SwissMetro | csv | Bierlaire et al. (2001) [2] | `load_swissmetro` | # |
| ModeCanada | csv | Forinash and Koppelman (1993) [3] | `load_modecanada` | # |
| Train | csv | Ben-Akiva et al. (1993) [5] | `load_train` | # |
| Heating | csv | Kenneth Train's website | `load_heating` | # |
| HC | csv | Kenneth Train's website | `load_hc` | # |
| Electricity | csv | Kenneth Train's website | `load_electricity` | # |
| Stated Car Preferences | csv | McFadden and Train (2000) [9] | `load_car_preferences` | # |
| TaFeng Grocery Dataset | csv | Kaggle | `load_tafeng` | # |
| ICDM-2013 Expedia | url | Ben Hamner and Friedman (2013) [6] | `load_expedia` | # |
| London Passenger Mode Choice | url | Hillel et al. (2018) [11] | `load_londonpassenger` | # |
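Each `load_*` function gives you the dataset in a couple of lines. The sketch below uses SwissMetro; the loaders are expected to return a ChoiceDataset by default, and the `as_frame=True` keyword mirrors the ModeCanada example shown in the Usage section (its availability for every loader is an assumption).

```python
from choice_learn.datasets import load_swissmetro

# Load SwissMetro as a ready-to-use ChoiceDataset (default behavior assumed)
swissmetro_dataset = load_swissmetro()

# Or retrieve the raw data as a pandas.DataFrame
# (as_frame=True is assumed to follow the same pattern as load_modecanada below)
swissmetro_df = load_swissmetro(as_frame=True)
```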

Model estimation

  • Several models are already implemented; you can import and parametrize them for your own usage.
  • Otherwise, custom modeling is made easy by subclassing the ChoiceModel class and specifying your own utility function (see the sketch after the model tables). [Example]

List of implemented & ready-to-use models:

| Model | Example | Colab | Related Paper | `from choice_learn.models import` | Doc |
|---|---|---|---|---|---|
| MNL | notebook | Open In Colab | | `SimpleMNL` | # |
| Conditional Logit | notebook | Open In Colab | Train et al. [4] | `ConditionalLogit` | # |
| Nested Logit | notebook | Open In Colab | McFadden [10] | `NestedLogit` | # |
| Latent Class MNL | notebook | Open In Colab | | `LatentClassConditionalLogit` | # |

| NN-based Model | Example | Colab | Related Paper | `from choice_learn.models import` | Doc |
|---|---|---|---|---|---|
| RUMnet | notebook | Open In Colab | Aouad and Désir [1] | `RUMnet` | # |
| TasteNet | notebook | Open In Colab | Han et al. [7] | `TasteNet` | # |
| Learning-MNL | notebook | Open In Colab | Sifringer et al. [13] | `LearningMNL` | # |
| ResLogit | notebook | Open In Colab | Wong and Farooq [12] | `ResLogit` | # |
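As a rough illustration of the subclassing workflow mentioned above, here is a minimal sketch of a custom model with a linear-in-features utility. The import path, the `trainable_weights` property and the `compute_batch_utility` signature are assumptions based on the custom-modeling example; check the documentation for the exact interface.

```python
import tensorflow as tf

# Import path assumed; ChoiceModel may also be exposed directly under choice_learn.models
from choice_learn.models.base_model import ChoiceModel


class LinearUtilityModel(ChoiceModel):
    """Sketch of a custom model: utility is a linear function of the items features."""

    def __init__(self, n_items_features, **kwargs):
        super().__init__(**kwargs)
        # One learnable weight per item feature, shared across all items
        self.beta = tf.Variable(tf.random.normal((n_items_features, 1)), name="beta")

    @property
    def trainable_weights(self):
        # Assumed hook: the base class optimizes the variables returned here
        return [self.beta]

    def compute_batch_utility(self,
                              shared_features_by_choice,
                              items_features_by_choice,
                              available_items_by_choice,
                              choices):
        # Assumed signature; items_features_by_choice is taken to be a tensor of shape
        # (batch_size, n_items, n_items_features). The utility of each item is X @ beta.
        items_features = tf.convert_to_tensor(items_features_by_choice, dtype=tf.float32)
        return tf.squeeze(tf.tensordot(items_features, self.beta, axes=1), axis=-1)
```

Once defined, such a model is meant to be estimated with the same fit and evaluate calls used for the built-in models.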

Auxiliary tools

Algorithms leveraging choice models are integrated within the library:

  • Assortment & Pricing optimization algorithms [Example] [8] Open In Colab

🔱 Getting Started

You can find the following tutorials to help you get started with the package:

  • Generic and simple introduction [notebook][doc] Open In Colab
  • Detailed explanations of data handling depending on the data format [notebook][doc] Open In Colab
  • A detailed example of conditional logit estimation [notebook][doc] Open In Colab
  • Introduction to custom modeling and more complex parametrization [notebook][doc] Open In Colab
  • All models and algorithms have a companion example in the notebook directory

🔱 Installation

User installation

The easiest way to install the package, preferably in a virtual environment, is with pip:

pip install choice-learn

Otherwise, you can use the git repository to get the latest version:

git clone [email protected]:artefactory/choice-learn.git

Dependencies

For manual installation, Choice-Learn requires the following:

  • Python (>=3.9, <3.13)
  • NumPy (>=1.24)
  • pandas (>=1.5)

For modeling you need:

  • TensorFlow (>=2.14, <2.17)

⚠️ Warning: If you are a Mac user with an M1 or M2 chip, importing TensorFlow might lead to Python crashing. In that case, use anaconda to install TensorFlow with `conda install -c apple tensorflow`.

An optional requirement, used for coefficient analysis and L-BFGS optimization, is:

  • TensorFlow Probability (>=0.22)

Finally, for pricing or assortment optimization, you need either Gurobi or OR-Tools:

  • gurobipy (>=11.0)
  • ortools (>=9.6)

               

💡 Tip: You can use the poetry.lock or requirements-complete.txt files with poetry or pip to install a fully predetermined and working environment.
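After installation, a quick way to check that everything is in place is to import the package. The `__version__` attribute is an assumption here; a successful import is already a good sign.

```python
# Quick sanity check of the installation
import choice_learn

# __version__ is assumed to exist; fall back to a plain message otherwise
print(getattr(choice_learn, "__version__", "choice-learn imported successfully"))
```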

🔱 Usage

Here is a short example of model parametrization to estimate a Conditional Logit on the ModeCanada dataset.

from choice_learn.data import ChoiceDataset
from choice_learn.models import ConditionalLogit
from choice_learn.datasets import load_modecanada

transport_df = load_modecanada(as_frame=True)
# Instantiation of a ChoiceDataset from a pandas.DataFrame
dataset = ChoiceDataset.from_single_long_df(df=transport_df,
                                            items_id_column="alt",
                                            choices_id_column="case",
                                            choices_column="choice",
                                            shared_features_columns=["income"],
                                            items_features_columns=["cost", "freq", "ovt", "ivt"],
                                            choice_format="one_zero")

# Initialization of the model
model = ConditionalLogit()

# Creation of the different weights:

# add_coefficients adds one coefficient for each specified item index.
# The intercept and income coefficients are added for each item except the first one,
# which is the reference alternative and is therefore zeroed.
model.add_coefficients(feature_name="intercept",
                       items_indexes=[1, 2, 3])
model.add_coefficients(feature_name="income",
                       items_indexes=[1, 2, 3])
model.add_coefficients(feature_name="ivt",
                       items_indexes=[0, 1, 2, 3])

# add_shared_coefficient adds a single coefficient used for all items listed in items_indexes:
# here, the cost, freq and ovt coefficients are shared by all items
model.add_shared_coefficient(feature_name="cost",
                             items_indexes=[0, 1, 2, 3])
model.add_shared_coefficient(feature_name="freq",
                             items_indexes=[0, 1, 2, 3])
model.add_shared_coefficient(feature_name="ovt",
                             items_indexes=[0, 1, 2, 3])

history = model.fit(dataset, get_report=True)
print("The average neg-loglikelihood is:", model.evaluate(dataset).numpy())
print(model.report)
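Once fitted, the model can also score the alternatives for each choice situation. The `predict_probas` method below is assumed from the package documentation; check the API reference for the exact name and return type.

```python
# Predicted choice probabilities for every choice situation in the dataset
# (predict_probas is assumed from the documentation; see the API reference)
probabilities = model.predict_probas(dataset)
print(probabilities.shape)  # expected shape: (n_choices, n_items)
```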

🔱 Documentation

A detailed documentation of this project is available here.
TensorFlow also has extensive documentation that can help you.
An academic paper has been published in the Journal of Open Source Software, here.

🔱 Contributing

You are welcome to contribute to the project! You can help in various ways:

  • raise issues
  • resolve issues already opened
  • develop new features
  • provide additional examples of use
  • fix typos, improve code quality
  • develop new tests

We recommend first opening an issue to discuss your ideas. More details are given here.

🔱 Citation

If you find this package or any of its features useful for your research, please consider citing our paper.

@article{Auriau2024,
  doi = {10.21105/joss.06899},
  url = {https://doi.org/10.21105/joss.06899},
  year = {2024},
  publisher = {The Open Journal},
  volume = {9},
  number = {101},
  pages = {6899},
  author = {Vincent Auriau and Ali Aouad and Antoine Désir and Emmanuel Malherbe},
  title = {Choice-Learn: Large-scale choice modeling for operational contexts through the lens of machine learning},
  journal = {Journal of Open Source Software}
}

License

This software is released under the MIT license, with no limitation of usage, including for commercial applications.

Affiliations

Choice-Learn has been developed through a collaboration between researchers at the Artefact Research Center and the laboratory MICS from CentraleSupélec, Université Paris Saclay.

   

           

🔱 References

Papers

[1] Representing Random Utility Choice Models with Neural Networks, Aouad, A.; Désir, A. (2022)
[2] The Acceptance of Modal Innovation: The Case of Swissmetro, Bierlaire, M.; Axhausen, K. W.; Abay, G. (2001)
[3] Applications and Interpretation of Nested Logit Models of Intercity Mode Choice, Forinash, C. V.; Koppelman, F. S. (1993)
[4] The Demand for Local Telephone Service: A Fully Discrete Model of Residential Calling Patterns and Service Choices, Train, K. E.; McFadden, D. L.; Ben-Akiva, M. (1987)
[5] Estimation of Travel Choice Models with Randomly Distributed Values of Time, Ben-Akiva, M.; Bolduc, D.; Bradley, M. (1993)
[6] Personalize Expedia Hotel Searches - ICDM 2013, Ben Hamner, A.; Friedman, D.; SSA_Expedia (2013)
[7] A Neural-embedded Discrete Choice Model: Learning Taste Representation with Strengthened Interpretability, Han, Y.; Calara Oereuran, F.; Ben-Akiva, M.; Zegras, C. (2020)
[8] A branch-and-cut algorithm for the latent-class logit assortment problem, Méndez-Díaz, I.; Miranda-Bront, J. J.; Vulcano, G.; Zabala, P. (2014)
[9] Stated Preferences for Car Choice in Mixed MNL Models for Discrete Response, McFadden, D.; Train, K. (2000)
[10] Modeling the Choice of Residential Location, McFadden, D. (1978)
[11] Recreating passenger mode choice-sets for transport simulation: A case study of London, UK, Hillel, T.; Elshafie, M. Z. E. B.; Jin, Y. (2018)
[12] ResLogit: A residual neural network logit model for data-driven choice modelling, Wong, M.; Farooq, B. (2021)
[13] Enhancing Discrete Choice Models with Representation Learning, Sifringer, B.; Lurkin, V.; Alahi, A. (2018)

Code and Repositories

Official implementations of the models:

[1] RUMnet
[7] TasteNet [Repo1] [Repo2]
[12] ResLogit
[13] Learning-MNL