The impossibility of recognizing the true Data Generating Process introduces uncertainty into model specification, an uncertainty that common selection methodologies often ignore. Model averaging is a promising alternative that deals directly with this issue by considering a quantity of interest across the whole model space, with the obvious advantage of avoiding the misspecification or wrong conclusions that the choice of a single ``best'' model can induce. Model averaging can be carried out in both the Frequentist and the Bayesian framework; in the present dissertation I follow the latter because of its greater flexibility and potential. In particular, a Bayesian Model Averaging (BMA) scheme based on Markov Chain Monte Carlo (MCMC) simulation, which jointly samples parameters and models, is proposed for Generalized Linear Models. The interest in Generalized Linear Models is motivated by Microeconometrics where, for instance, binary choice models are in common use and where the application of such techniques is still a largely unexplored field. A software implementation in Gretl, as a package of functions, is then provided, with particular attention to new computational challenges, chiefly the parallelization of the process via MPI (Message Passing Interface). Parallelization is routine in standard Monte Carlo experiments, where the independence of the sampling procedure allows the maximal gain in CPU time; the same is not so straightforward in MCMC experiments. I show how even a simple application of parallelization to MCMC is still useful, both for a better exploration of the model and parameter space and for time savings. Finally, the aforementioned Gretl package makes an automated procedure available to the common user as well, with the implicit recommendation of ``reading the package leaflet carefully''. 
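The idea of sampling over the model space can be illustrated with a minimal sketch. This is not the thesis's RJMCMC scheme for GLMs: it is a toy MC³-style Metropolis walk over the subsets of two candidate regressors in a linear model, with the marginal likelihood replaced by the usual BIC approximation, on simulated data invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data (hypothetical, not from the thesis): y depends on x0 only,
# while x1 is an irrelevant candidate regressor.
n = 200
X = rng.normal(size=(n, 2))
y = 1.5 * X[:, 0] + rng.normal(size=n)

def bic(cols):
    """BIC of the linear model that uses the predictor subset `cols`."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    return n * np.log(resid @ resid / n) + Z.shape[1] * np.log(n)

# Metropolis walk over the four candidate models: at each step toggle
# one regressor and accept with the BIC-approximated posterior odds
# exp((BIC_current - BIC_proposal) / 2).
models = [(), (0,), (1,), (0, 1)]
current, visits = (), {m: 0 for m in models}
for _ in range(5000):
    j = int(rng.integers(2))
    proposal = tuple(sorted(set(current) ^ {j}))
    if np.log(rng.uniform()) < (bic(current) - bic(proposal)) / 2:
        current = proposal
    visits[current] += 1

# Visit frequencies approximate the posterior model probabilities.
post = {m: v / 5000 for m, v in visits.items()}
```

On this toy data the chain concentrates on the model containing only `x0`, the true regressor; any model-specific quantity could then be averaged with the weights in `post`.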
The economic implications of model averaging are explored in a Treatment Evaluation problem with Propensity Score matching. Model averaging is commonly used in forecasting problems with linear models, where the driving idea of producing an estimate that balances those of each specification (properly weighted) yields more robustness than ``guessing'' any single one. In Propensity Score evaluation the choice of which variables should be included is often ignored, yet the resulting specification can be decisive for the final treatment effect estimate; a similar argument therefore applies: could model averaging be profitable in the Propensity Score definition instead of guessing which variables should be included? As an empirical illustration, I investigate the economic effect of tax rebates on consumption, using as case study the 2014 Italian income tax rebate, which increased the monthly salary of employees by 80 euro. A dataset is built from the ``Survey on Household Income and Wealth'' (SHIW) held by the Bank of Italy, and three different model averaging techniques are compared: the first uses the model-averaged posterior mean of the parameters of interest to build the Propensity Score, and then performs matching and treatment evaluation in the Frequentist manner; the second averages the Frequentist treatment effects across the different models identified by the BMA procedure, using the posterior model probabilities as weights; finally, a fully Bayesian procedure in which each parameter draw sampled by the BMA procedure defines a propensity score used to derive a model-specific treatment effect, which is then aggregated according to the model probabilities. 
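The second technique amounts to a simple weighted average, which a short numerical sketch can make concrete. The figures below are placeholders invented for the illustration, not estimates from the thesis: a Frequentist treatment effect computed under each model retained by a BMA run, weighted by that model's posterior probability.

```python
import numpy as np

# Hypothetical model-specific treatment effects and their posterior
# model probabilities (placeholder numbers, not results from the thesis).
effects = np.array([0.8, 1.1, 0.9])     # effect estimated under each model
post_prob = np.array([0.5, 0.3, 0.2])   # posterior model probabilities, sum to 1

# Model-averaged treatment effect: each model-specific estimate is
# weighted by its posterior model probability.
bma_effect = float(effects @ post_prob)  # 0.8*0.5 + 1.1*0.3 + 0.9*0.2 = 0.91
```

The fully Bayesian variant replaces each fixed `effects` entry with a whole distribution of effects, one per sampled parameter vector, before applying the same weighting.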
Matching is performed via pairwise nearest-neighbour matching under different calipers and different data orderings. The last two BMA methodologies balance the estimates across specifications, leading to a treatment evaluation less affected by the discretion in choosing which variables to include in the Propensity Score definition; the first proposed BMA technique, instead, does not always guarantee this. Moreover, it shows higher variability across matching set-ups, as opposed to the fully Bayesian method, which is the most robust because it accounts for an additional source of uncertainty: the propensity score distribution.
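The matching step itself can be sketched as follows. This is a minimal sketch on simulated data invented for the example: the true propensity is used as the score (in practice it would be estimated, e.g. by the BMA procedure above), matching is pairwise nearest neighbour with replacement, and treated units with no control inside the caliper are discarded.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data (hypothetical): treatment depends on a confounder x,
# which also shifts the outcome; the true treatment effect is 2.
n = 500
x = rng.normal(size=n)
ps = 1.0 / (1.0 + np.exp(-x))                 # propensity score
d = (rng.uniform(size=n) < ps).astype(int)    # treatment indicator
y = 2.0 * d + x + rng.normal(size=n)          # outcome

def att_nn(ps, d, y, caliper=0.05):
    """Pairwise nearest-neighbour ATT, matching with replacement and
    dropping treated units whose best match lies outside the caliper."""
    treated = np.flatnonzero(d == 1)
    control = np.flatnonzero(d == 0)
    gaps = []
    for i in treated:
        dist = np.abs(ps[control] - ps[i])
        j = dist.argmin()
        if dist[j] <= caliper:                # enforce the caliper
            gaps.append(y[i] - y[control[j]])
    return float(np.mean(gaps))

att = att_nn(ps, d, y)
```

Re-running `att_nn` with different calipers (and, for without-replacement variants, different data orderings) is exactly the kind of sensitivity check the comparison above refers to.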
The impossibility of pinning down the Data Generating Process, i.e. the law that generates the data, often leads to choosing statistical-econometric models without accounting for the uncertainty inherent in that choice: model selection techniques condition inference on a single ``best'' model, but clearly nothing guarantees that this model really is the best. Model averaging is an approach that overcomes these drawbacks: it simply treats a quantity of interest (an estimator, for instance) as its average over all the models considered, weighted by their probability of being true. Model averaging can be applied from either a Frequentist or a Bayesian perspective: here the choice falls on the latter, thanks to its greater flexibility and potential in both interpretative and statistical terms. The central point, however, is to apply a Bayesian Model Averaging (BMA) technique to the family of Generalized Linear Models (GLMs), which are extremely common in microeconometric applications, a testing ground still largely unexplored for model averaging. The classical BMA framework, however, is inappropriate for such models, so an evolution of it is required: the proposed solution exploits the Reversible Jump Markov Chain Monte Carlo (RJMCMC) scheme. This leads to the original idea of the thesis: a Gretl program implementing the RJMCMC mechanism for model averaging, making a generally complex procedure accessible to the common user as well. Worth noting is the attention devoted to several computational aspects, first of all the parallelization of the procedure, which guarantees not only a deeper analysis of the estimates and models considered but also, and above all, a clear improvement in computation time. 
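The parallelization pattern is the classic one of running several independent chains side by side and pooling their draws. The sketch below is a toy stand-in: a random-walk Metropolis chain targeting a standard normal, with Python threads playing the role that MPI ranks play in the thesis (threads merely keep the sketch self-contained; real CPU-time gains require process-level parallelism such as MPI).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def chain(seed, n=20000):
    """One independent random-walk Metropolis chain targeting N(0, 1)."""
    rng = np.random.default_rng(seed)
    x, draws = 0.0, np.empty(n)
    for t in range(n):
        prop = x + rng.normal()
        # log acceptance ratio for the standard normal target density
        if np.log(rng.uniform()) < (x * x - prop * prop) / 2.0:
            x = prop
        draws[t] = x
    return draws[n // 2:]                     # drop the first half as burn-in

# Four chains with different seeds run concurrently; their post-burn-in
# draws are then pooled into a single sample.
with ThreadPoolExecutor(max_workers=4) as ex:
    pooled = np.concatenate(list(ex.map(chain, range(4))))
```

Besides the time saving, running several chains from different starting points gives the better exploration of the model and parameter space mentioned above, since each chain can visit regions the others miss.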
At present, most of the software available for model averaging is intended only for linear models, and the packages that extend the framework to GLMs make heavy use of approximations; the algorithm proposed here therefore introduces a more complete mechanism, with the further innovation of the aforementioned parallelization, another feature not implemented in competing algorithms. The empirical application then turns to another field where model averaging techniques are still little known and little used: Propensity Score matching. The choice of variables in this kind of problem is often treated as secondary, but it actually turns out to be decisive, so introducing uncertainty into the variable choice and accounting for it through model averaging could be a crucial step. The application proposed here studies the impact on consumption of the 80 euro tax bonus introduced in Italy by Decree Law 66/2014, and, as we shall see, its analysis perfectly reflects the kind of uncertainty to be evaluated. The approach used is a propensity score difference-in-differences applied to a dataset built from the ``Survey on Household Income and Wealth'' (SHIW) of the Bank of Italy. A first examination shows that the choice of variables in the definition of the Propensity Score is decisive for the final evaluation of the policy; consequently, three different model averaging techniques based on the previously defined algorithm are proposed: the first builds the propensity score not on Frequentist estimates but on the mean of the posterior distribution defined over all models; the second and the third instead average the effect estimates model by model, the only distinction being that one is more Frequentist in spirit and the other purely Bayesian. 
Matching is performed via nearest neighbour, with different calipers and data orderings. The conclusion is that the techniques which average the effects directly are more robust than the first alternative; in particular, the purely Bayesian technique shows the greatest robustness in the results, opening the door to a plausible technique that reduces the arbitrariness both in the choice of variables and in the matching set-up.
A Multi-parallel BMA approach for Generalized Linear Models in Gretl / Pedini, Luca. - (2019 Mar 14).
Tesi_Pedini.pdf — doctoral thesis, Creative Commons licence, 1.45 MB, Adobe PDF. Open Access since 01/10/2020.