Michel Bierlaire
Estimating your first model with Biogeme 3.2.11
This video describes how to estimate your first choice model with Biogeme.
The notebook is available here: github.com/michelbierlaire/biogeme/blob/master/examples/swissmetro/First%20example.ipynb
1,665 views
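A minimal sketch of this kind of first model, using only Biogeme 3.2.11 calls that also appear in the comments below; the data file and the column names TRAIN_TT, CAR_TT and CHOICE are hypothetical, and the linked notebook remains the authoritative version:

    import pandas as pd
    import biogeme.biogeme as bio
    import biogeme.database as db
    import biogeme.models as models
    from biogeme.expressions import Beta, Variable

    # Hypothetical data set with columns TRAIN_TT, CAR_TT and CHOICE.
    df = pd.read_csv('my_data.csv')
    database = db.Database('my_data', df)

    TRAIN_TT = Variable('TRAIN_TT')
    CAR_TT = Variable('CAR_TT')
    CHOICE = Variable('CHOICE')

    # Parameters to estimate: Beta(name, start, lower, upper, fixed flag).
    ASC_CAR = Beta('ASC_CAR', 0, None, None, 0)
    B_TIME = Beta('B_TIME', 0, None, None, 0)

    # One utility function per alternative, both assumed always available.
    V = {1: B_TIME * TRAIN_TT, 2: ASC_CAR + B_TIME * CAR_TT}
    av = {1: 1, 2: 1}

    # Log likelihood contribution of each observation for a logit model.
    logprob = models.loglogit(V, av, CHOICE)
    the_biogeme = bio.BIOGEME(database, logprob)
    the_biogeme.modelName = 'first_model'
    results = the_biogeme.estimate()
    print(results.getEstimatedParameters())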

Videos

Preparing data for Biogeme 3.2.11
2K views · 1 year ago
This video describes how to prepare the data in order to be used for the specification of choice models, to be estimated with Biogeme.
Installing Biogeme 3.2.11 on Mac OSX
761 views · 1 year ago
This video walks you through the installation of what you need to run Biogeme on Mac OSX: Python, a virtual environment, a development environment, and Biogeme itself.
Installing Biogeme 3.2.11 on Windows
2.2K views · 1 year ago
This video walks you through the installation of what you need to run Biogeme on Windows: Python, a virtual environment, a development environment, and Biogeme itself.
Optimization and simulation. Introduction.
2.8K views · 2 years ago
Lecture for the PhD course "Optimization and Simulation", EPFL. Related videos: youtube.com/playlist?list=PL10NOnsbP5Q5NlJ-Y6Eiup6RTSfkuj1TR
Multivariate Extreme Value Models
1.5K views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Discrete choice and machine learning: two complementary methodologies (part 2)
1.6K views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Discrete choice and machine learning: two complementary methodologies (part 1)
1.4K views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Panel data: dynamic model with panel effects
706 views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Panel data: dynamic model
866 views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Panel data: serial correlation
1.5K views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Panel data: static model
1.3K views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Choice models with latent variables: case study
1.7K views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Choice models with latent variables: Modeling latent concepts (part 2)
1.7K views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Choice models with latent variables: Modeling latent concepts (part 1)
2.4K views · 2 years ago
Lecture from the MOOC "Discrete choice models: selected topics"
Choice models with latent variables: Beyond rationality
826 views · 2 years ago
Mixture models: summary
339 views · 2 years ago
Mixture models: individual level parameters
403 views · 2 years ago
Mixture models: latent classes
2K views · 2 years ago
Mixture models: taste heterogeneity
839 views · 3 years ago
Mixture models: alternative specific variance
607 views · 3 years ago
Mixture models: nesting structures
610 views · 3 years ago
Monte-Carlo integration (part 2)
2K views · 3 years ago
Monte-Carlo integration (part 1)
8K views · 3 years ago
Mixtures: introduction
721 views · 3 years ago
Sampling: weighted exogenous maximum likelihood estimation
624 views · 3 years ago
Sampling: conditional maximum likelihood estimation
2.2K views · 3 years ago
Sampling: maximum likelihood estimation
796 views · 3 years ago
Sampling strategies: an example
425 views · 3 years ago
Sampling strategies
1.2K views · 3 years ago

COMMENTS

  • @briceathey2744 · 2 days ago

    Long live Switzerland!

  • @sharshabillian · 4 days ago

    Many thanks for taking the time to share your knowledge so articulately.

  • @pnachtwey · 27 days ago

    I would like to see a real example. In my case the line search begins from the current point in the opposite direction of the gradient. I must search along the line. I can just repeatedly iterate along the line and evaluate the cost function until it no longer gets smaller. You are suggesting searching between two points, but how far should the end point or "low" point be from the current "high" point? You are assuming that the boundary of the end point is known and will bracket the minimum point along the line search. What if it isn't?
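A common remedy for the bracketing issue raised in this comment (a sketch of the standard expanding-bracket idea, not necessarily the method used in the video): expand the trial step geometrically along the descent direction until the objective increases again, which guarantees that a minimizer is bracketed.

    import numpy as np

    def bracket_minimum(f, x, d, step=1.0, factor=2.0, max_iter=60):
        """Bracket a minimizer of phi(t) = f(x + t*d) along a descent direction d."""
        phi = lambda t: f(x + t * d)
        a, mid = 0.0, step
        f_a, f_mid = phi(a), phi(mid)
        if f_mid >= f_a:
            # Overshot immediately: d is a descent direction, so a
            # minimizer of phi already lies between 0 and the first step.
            return a, mid
        for _ in range(max_iter):
            b = mid * factor
            f_b = phi(b)
            if f_b >= f_mid:          # phi went back up: [a, b] brackets a minimizer
                return a, b
            a, f_a = mid, f_mid       # still descending: keep expanding
            mid, f_mid = b, f_b
        return a, mid                 # fallback if no increase was observed

    # Example: quadratic bowl, steepest descent from (3, 4).
    f = lambda z: z @ z
    x = np.array([3.0, 4.0])
    print(bracket_minimum(f, x, -2 * x))   # an interval containing t = 0.5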

  • @evstigneevnm · 1 month ago

    Dear sir, thank you for your series. I was doing some research in the field of trust-region updates and found your video series. I have a question on this video, though. Let's say we are working in the reals. By definition, a positive definite matrix is a SYMMETRIC matrix M ∈ R^{N×N} such that x^T M x > 0 for all x ∈ R^N with ||x|| > 0. At 04:04 it is said that the matrix D_k = inv(A + τI), with τ ∈ R such that D_k is positive definite. Am I correct that such a method will not work if A is not symmetric? Is there a remedy for that case? I know about using a diagonal, or symmetrization of the matrix (M = (A + A^T)/2). But are there any other good suggestions, apart from trust-region type methods, especially if Newton's method is applied to find a solution to the problem F(x) = 0, not a minimization problem? Thank you for your time in advance.
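On the non-symmetric case asked about here, a sketch of the standard construction the comment itself hints at (an illustration, not the video's method): symmetrize A, which leaves the quadratic form x^T A x unchanged, then increase τ until a Cholesky factorization of A_sym + τI succeeds.

    import numpy as np

    def make_positive_definite(A, tau0=1e-3, factor=10.0, max_tries=20):
        """Return a positive definite modification of A, as A_sym + tau*I."""
        A_sym = 0.5 * (A + A.T)   # x^T A x == x^T A_sym x for all x
        tau = 0.0
        for _ in range(max_tries):
            try:
                candidate = A_sym + tau * np.eye(A.shape[0])
                np.linalg.cholesky(candidate)   # succeeds iff positive definite
                return candidate
            except np.linalg.LinAlgError:
                tau = tau0 if tau == 0.0 else tau * factor
        raise RuntimeError('could not reach a positive definite matrix')

    # Example: a non-symmetric matrix whose symmetric part is indefinite.
    A = np.array([[0.0, 3.0], [-1.0, -2.0]])
    print(make_positive_definite(A))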

  • @miqomargaryan15 · 1 month ago

    croissant

  • @Abu_khalid1 · 1 month ago

    Thank you, Sir.

  • @arminmakani7471 · 1 month ago

    I deeply appreciate your amazing teaching.

  • @arminmakani7471 · 1 month ago

    great

  • @arminmakani7471 · 1 month ago

    Wooooooooooooooooooooow. Dear Professor, your explanations are brilliant and helpful. I also downloaded your book. Thank you so much for your services.

  • @gobichai2704 · 2 months ago

    You saved my life!

  • @avk8477 · 2 months ago

    Extremely concise and lucid explanation. Thank you Prof. Michel.

  • @friegglb3846 · 2 months ago

    Dear Prof, is there any reason we set the upper bound for the nest parameter to 10?

  • @achrafBadiry · 2 months ago

    Love the French accent. Cheers!

  • @TauvicRitter · 2 months ago

    I don't understand the customer behaviour. Customers don't go to a bar when it is closed. And I guess they just leave when the service takes too long.

  • @mircosoffritti6484 · 2 months ago

    Crystal clear.

  • @NeoxX317 · 3 months ago

    You are a reference in my eyes! I have known your channel since university, and I now work on a project around MDA / MDO; your videos are a great help. Thank you!

  • @amareyaekob3343 · 3 months ago

    Dear Michel, thank you so much for this insightful video. Can you make a video demonstrating choice modeling with latent variables in SPSS, Stata or other software? That would help us figure out how to integrate SEM with discrete choice modeling in the form of structural choice modeling. Thank you so much again!

  • @marlonbrando6826 · 3 months ago

    Why are the gradients not perpendicular to the level sets at 2:23?

  • @nayeemislam8123 · 3 months ago

    The video "Survival of the fittest" is not available on YouTube anymore.

  • @hannukoistinen5329 · 3 months ago

    Well... math is not the strongest area of the French :). Wines and good food, maybe.

  • @AkablaaTribe · 3 months ago

    Dr. Bierlaire, you are the best. I have been involved with SP studies for the past 34 years, from Park and Ride, LRT, BRT, and early or late start times (peak spreading), to risk-averse propensity at signalised junctions. Having conducted over 30K SP surveys myself over the past 34 years, I have always had questions and never found transparent answers concerning theory and estimation, but you are a superstar who explains leaving nothing unanswered. A big thanks!

  • @ZenjobBuddyJensJeremies · 4 months ago

    Thank you very much!

  • @operitivo4635 · 4 months ago

    Thank you for the tutorial!

  • @jossec1344 · 4 months ago

    Hi Sir, thank you for your amazing videos. At 15:53 there might be a mistake in the slide: you show that the strata are defined by x and i simultaneously, but it is written ESS, which is a pure choice-based strategy (so based only on i). Should it not be XESS there? Thank you again for all your great work!

  • @user-el4rc4xz5w · 4 months ago

    Very good and intuitive explanation. Thank you so much, sir! You make my learning a really wonderful experience.

  • @SepsOfficial · 5 months ago

    Thank you.

  • @SepsOfficial · 5 months ago

    Where is part 3?

  • @StudyJuly · 5 months ago

    Thank you so much. So well explained, exactly what I needed!

  • @user-ul4bo2dv5q · 5 months ago

    Why could I not install Biogeme through Anaconda?

  • @raideno56 · 5 months ago

    Sir, at 2:38 we had the reduced costs c6 = -1.25 and c5 = -0.75; wouldn't it have been more interesting to choose c6 as the entering variable, since it would decrease the objective function more than c5? Thank you.

  • @raideno56 · 5 months ago

    Hey, please: at 3:27 I didn't understand why N * d_N is the same thing as the SUM of (A_j * d_j) with j from m + 1 to n.

    • @ytenergy444 · 4 months ago

      N is an m×(n-m) matrix; d_N is an (n-m)-dimensional column vector, hence their product is an m-dimensional column vector. Now you can see the resulting m-dimensional column vector as a linear combination of the columns of N, where the coefficients of the linear combination are the entries of d_N, which are the d_j in the summation. The columns of N are indicated as A_j, with j going from m+1 to n (remember that N is the part of A that is not B, where B is m×m), and the entries of d_N are indicated as d_j (these are numbers). In the example, the idea is to choose only one non-basic variable, the k-th one, and to set it to 1. For this reason, the summation boils down to just the extraction of the k-th column of A. Hope this helps!
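A small numerical check of this reply; the sizes m = 2, n = 5 and the matrix entries are arbitrary, not taken from the video.

    import numpy as np

    m, n = 2, 5
    A = np.arange(1.0, 1.0 + m * n).reshape(m, n)   # any m x n matrix
    N = A[:, m:]                  # the non-basic columns A_{m+1}, ..., A_n
    d_N = np.zeros(n - m)
    k = 1                         # choose one non-basic variable (0-based here)
    d_N[k] = 1.0                  # set it to 1, all others to 0

    lhs = N @ d_N
    rhs = sum(d_N[j] * N[:, j] for j in range(n - m))   # the summation
    assert np.allclose(lhs, rhs)                        # N d_N == sum_j d_j A_j
    assert np.allclose(lhs, N[:, k])                    # just the k-th column of N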

  • @senzhan221 · 5 months ago

    Using the directional derivative to explain the reduced cost is really intuitive!

  • @muhammedteshome · 5 months ago

    It does not work on my PC. How do I correct it?

  • @jossec1344 · 5 months ago

    Hi Sir, thank you for all your fantastic videos. How do you solve the system of equations at 6:39 to get the vector of probabilities of being in (perfect condition, partially damaged, seriously damaged, completely useless) = (5/8, 1/4, 3/32, 1/32)? Thank you very much!
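Assuming the system at 6:39 is the stationary-distribution system of a Markov chain, pi^T P = pi^T with sum(pi) = 1 (an assumption; the transition matrix below is a placeholder, not the one from the video), it can be solved as an ordinary linear system:

    import numpy as np

    def stationary_distribution(P):
        """Solve pi^T P = pi^T together with sum(pi) = 1."""
        n = P.shape[0]
        # Stack the n equations (P^T - I) pi = 0 with the normalization row.
        A = np.vstack([P.T - np.eye(n), np.ones(n)])
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    # Placeholder 2-state chain; substitute the matrix from the video.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    print(stationary_distribution(P))   # -> [0.8333..., 0.1666...]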

  • @hibaezzahi1603 · 5 months ago

    Why did we choose x2 = 2.5 and not x1 = 1.5?

  • @mohammadhiasat2544 · 7 months ago

    Where can I get a dataset like the one you use here?

  • @vishalmahajan9732 · 8 months ago

    Thank you for sharing, Professor.

  • @tarfasajale9463 · 9 months ago

    Dear Professor Michel, why am I unable to get the output for my model correct? Any help is greatly appreciated. Please see my code snippet.

    The model syntax:

        # Import necessary libraries
        import pandas as pd
        import numpy as np
        import biogeme.biogeme as bio
        import biogeme.models as models
        import biogeme.database as db
        import biogeme.version as ver
        from biogeme.expressions import Beta

        # Load and reshape the dataset
        df = pd.read_excel('My_file.xlsx')

        # Create a Biogeme database
        database = db.Database('My_file', df)
        globals().update(database.variables)

        # Parameters to be estimated
        ASC_1 = Beta('ASC_1', 0, None, None, 0)
        ASC_2 = Beta('ASC_2', 0, None, None, 0)
        ASC_3 = Beta('ASC_3', 0, None, None, 1)
        B_Wsup = Beta('B_Wsup', 0, -10000, 10000, 0)
        B_Wqua = Beta('B_Wqua', 0, -10000, 10000, 0)
        B_Price = Beta('B_Price', 0, -10000, 10000, 0)

    Specification of the utility functions (the choice design is generic/unlabelled, hence ASC_1 and ASC_2 are expected to be similar or the same; ASC_3 corresponds to the third alternative, which is the status quo among the choice alternatives). I also tried without the lower and upper bounds, or setting them equal to 0 (zero), but the result didn't change.

        V1 = ASC_1 + B_Wsup * Wsup + B_Wqua * Wqua + B_Price * Price
        V2 = ASC_2 + B_Wsup * Wsup + B_Wqua * Wqua + B_Price * Price
        V3 = ASC_3 + B_Wsup * Wsup + B_Wqua * Wqua + B_Price * Price

        # Association of alternatives with the utility functions
        V = {1: V1, 2: V2, 3: V3}

    Associate the availability conditions with the alternatives (in the dataframe, the Concept column is for availability, i.e. Concept = 1 for alternative 1, Concept = 2 for alternative 2, and Concept = 3 for alternative 3, which is the status quo):

        av = {1: database.variables['Concept'] == 1,
              2: database.variables['Concept'] == 2,
              3: database.variables['Concept'] == 3}

    Define the contribution to the log likelihood of each observation (Chosen = 1 if alternative 1 is chosen, = 2 if alternative 2 is chosen, and = 3 if alternative 3 is chosen):

        logprob = models.loglogit(V, av, Chosen)

        # Biogeme
        biogeme = bio.BIOGEME(database, logprob)
        biogeme.modelName = '01logit'

        # Running the estimation
        results = biogeme.estimate()

        # Get the result
        pandasResults = results.getEstimatedParameters()
        pandasResults

    The output:

                   Value  Rob. Std err    Rob. t-test  Rob. p-value
        ASC_1    1.77649           0.0  1.797693e+308           0.0
        ASC_2    0.00000           0.0  1.797693e+308           0.0
        B_Price  0.00000           0.0  1.797693e+308           0.0
        B_Wqua   0.00000           0.0  1.797693e+308           0.0
        B_Wsup   0.00000           0.0  1.797693e+308           0.0

  • @BilalTaskin-om6il · 9 months ago

    Thank you so much.

  • @alexis91459 · 10 months ago

    Really informative. I just don't understand why the noise at 20:35 follows a normal distribution.

  • @mkay314 · 10 months ago

    Does the line search algorithm described starting at 1:11 satisfy only the weak Wolfe condition on the curvature but not the strong condition (the one with absolute values)? Is there an easy modification of the algorithm if I would like to satisfy the strong condition on the curvature?
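For reference, the two curvature conditions this question contrasts, written as a check one can run on a candidate step (the function below is illustrative, not the algorithm from the video):

    import numpy as np

    def curvature_ok(grad_f, x, d, t, c2=0.9, strong=False):
        """Wolfe curvature condition for phi(t) = f(x + t*d) at step length t."""
        slope0 = grad_f(x) @ d            # phi'(0), negative for a descent direction
        slope_t = grad_f(x + t * d) @ d   # phi'(t)
        if strong:
            return abs(slope_t) <= c2 * abs(slope0)   # strong Wolfe
        return slope_t >= c2 * slope0                 # weak Wolfe

    # Example with f(x) = ||x||^2, grad f(x) = 2x: a long step can satisfy
    # the weak condition while violating the strong one.
    x = np.array([1.0, 0.0])
    d = -2 * x
    print(curvature_ok(lambda z: 2 * z, x, d, t=1.0))                # True
    print(curvature_ok(lambda z: 2 * z, x, d, t=1.0, strong=True))   # False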

  • @kamoldebnath9629 · 10 months ago

    Oh my god! Thank you so much for making the video.

  • @tarabalam9962 · 10 months ago

    Thank you for explaining the concept in simple terms. Really good video!

  • @DataScience-py1pe · 11 months ago

    Thank you!

  • @tuongnguyen9391 · 1 year ago

    Thank you from Vietnam!

  • @riccardoscarpa6417 · 1 year ago

    Great presentation, as usual... thank you, Michel!

  • @eliottboublil7796 · 1 year ago

    Hello Sir, excellent video, but I simply wanted to know why, since B is invertible, B = B^(-1). Thank you.

  • @Bartholinn · 1 year ago

    Amazing explanation, sir. Thank you.

  • @shadowhell5289 · 1 year ago

    Great videos, Professor!

  • @user-ks6iz3ch6s · 1 year ago

    good