kaggle-avazu-rank1

Category: Data mining / data warehousing
Development tool: WINDOWS
File size: 260KB
Downloads: 7
Upload date: 2017-12-18 11:38:08
Uploader: 秦冰
Description: Rank-1 solution to the Kaggle Avazu click-through rate prediction competition, including the relevant data and code.

File list:
kaggle-avazu-rank1 (0, 2016-11-09)
kaggle-avazu-rank1\add_dummy_label.py (570, 2016-11-09)
kaggle-avazu-rank1\bag (0, 2016-11-09)
kaggle-avazu-rank1\bag\converter (0, 2016-11-09)
kaggle-avazu-rank1\bag\converter\6.py (2737, 2016-11-09)
kaggle-avazu-rank1\bag\converter\common.py (2405, 2016-11-09)
kaggle-avazu-rank1\bag\converter\group6.py (5899, 2016-11-09)
kaggle-avazu-rank1\bag\mark (0, 2016-11-09)
kaggle-avazu-rank1\bag\mark\Makefile (335, 2016-11-09)
kaggle-avazu-rank1\bag\mark\src (0, 2016-11-09)
kaggle-avazu-rank1\bag\mark\src\common.cpp (2529, 2016-11-09)
kaggle-avazu-rank1\bag\mark\src\common.h (6647, 2016-11-09)
kaggle-avazu-rank1\bag\mark\src\timer.cpp (602, 2016-11-09)
kaggle-avazu-rank1\bag\mark\src\timer.h (232, 2016-11-09)
kaggle-avazu-rank1\bag\mark\src\train.cpp (5355, 2016-11-09)
kaggle-avazu-rank1\bag\run (0, 2016-11-09)
kaggle-avazu-rank1\bag\run\6.py (1480, 2016-11-09)
kaggle-avazu-rank1\bag\run.sh (489, 2016-11-09)
kaggle-avazu-rank1\bag\util (0, 2016-11-09)
kaggle-avazu-rank1\bag\util\cat_id_click.py (585, 2016-11-09)
kaggle-avazu-rank1\bag\util\cat_submit.py (686, 2016-11-09)
kaggle-avazu-rank1\bag\util\common.py (22, 2016-11-09)
kaggle-avazu-rank1\bag\util\count_feat.py (1029, 2016-11-09)
kaggle-avazu-rank1\bag\util\join_data.py (587, 2016-11-09)
kaggle-avazu-rank1\bag\util\parallel_do.py (564, 2016-11-09)
kaggle-avazu-rank1\bag\util\parallelizer.py (1082, 2016-11-09)
kaggle-avazu-rank1\base (0, 2016-11-09)
kaggle-avazu-rank1\base\converter (0, 2016-11-09)
kaggle-avazu-rank1\base\converter\2.py (1958, 2016-11-09)
kaggle-avazu-rank1\base\converter\common.py (2972, 2016-11-09)
kaggle-avazu-rank1\base\mark (0, 2016-11-09)
kaggle-avazu-rank1\base\mark\mark1 (0, 2016-11-09)
kaggle-avazu-rank1\base\mark\mark1\Makefile (333, 2016-11-09)
kaggle-avazu-rank1\base\mark\mark1\src (0, 2016-11-09)
kaggle-avazu-rank1\base\mark\mark1\src\common.cpp (3003, 2016-11-09)
kaggle-avazu-rank1\base\mark\mark1\src\common.h (6412, 2016-11-09)
kaggle-avazu-rank1\base\mark\mark1\src\timer.cpp (602, 2016-11-09)
kaggle-avazu-rank1\base\mark\mark1\src\timer.h (232, 2016-11-09)
... ...

4 Idiots' Approach for Click-through Rate Prediction
====================================================

Our team consists of:

Name            Kaggle ID       Affiliation
====================================================================
Yu-Chin Juan    guestwalk       National Taiwan University (NTU)
Wei-Sheng Chin  mandora         National Taiwan University (NTU)
Yong Zhuang     yolicat         National Taiwan University (NTU)
Michael Jahrer  Michael Jahrer  Opera Solutions

Our final model is an ensemble of NTU's model and Michael's model. Michael's model is based on his work at Opera Solutions, so he cannot release his part. Therefore, the code and documents here present only NTU's model.

This README explains how to run our code. For an introduction to our approach, please see:

    http://www.csie.ntu.edu.tw/~r01922136/slides/kaggle-avazu.pdf

The model we use for this competition is called `field-aware factorization machines.' We have released a package for this model at:

    http://www.csie.ntu.edu.tw/~r01922136/libffm

System Requirements
===================

- ***-bit Unix-like operating system
- Python 3
- g++ (with C++11 and OpenMP support)
- pandas (required only if you want to run the `bag' part; see `Step-by-step' below)

Step-by-step
============

Our solution is an ensemble of 20 models, organized into the following three parts:

name       public score  private score  description
===========================================================================
base       0.3832        0.3813         2 basic models
bag        0.3826        0.3807         2 models using bag features
ensemble   0.3817        0.3797         an ensemble of the above 4 models
                                        and 16 new small models

Because the `bag' part consumes a huge amount of memory (more than ***GB) and the `ensemble' part takes a long time to run, this guide walks you through the `base' part first. If you want to reproduce our best result, please run the commands in the final step on a suitable machine.

1. First, use the following command to run a tiny end-to-end example:

       $ ./run.sh x

2. Create a symbolic link to the training set:

       $ ln -sf tr.r0.csv

3. Add a dummy label to the test set:

       $ ./add_dummy_label.py va.r0.csv

4. Verify the checksums:

       $ md5sum tr.r0.csv va.r0.csv
       f5d49ff28f41dc993b9ecb2372abb033  tr.r0.csv
       6edd380a5897bc16b61c5a626062f7b3  va.r0.csv

5. Reproduce our base submission:

       $ ./run.sh 0

   Note: base.r0.prd is the submission file.

6. (optional) Reproduce our best submission:

       $ ./run_all.sh x

   If this succeeds, then run:

       $ ./run_all.sh 0

   Note: The algorithm in the `bag' part is non-deterministic, so the result can differ slightly between runs.

If you want to trace this code, please be prepared for it to take some effort. We did not have enough time to polish the code here to improve its readability. Sorry about that.

For any questions and comments, please send email to: Yu-Chin (guestwalk@gmail.com)
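For readers tracing the code, the prediction rule of a field-aware factorization machine (the model named in the README) can be sketched as below. This is an illustrative pure-Python sketch of the FFM decision function, not the repository's C++ implementation; all names here are hypothetical.

```python
import math

def ffm_predict(weights, feats):
    """FFM decision value for one instance with binary features.

    weights: dict (feature, field) -> latent vector (list of k floats);
             each feature keeps one latent vector per *other* field
    feats:   list of (field, feature) pairs active in this instance
    """
    z = 0.0
    for a in range(len(feats)):
        for b in range(a + 1, len(feats)):
            f1, j1 = feats[a]
            f2, j2 = feats[b]
            w1 = weights[(j1, f2)]  # j1's vector w.r.t. j2's field
            w2 = weights[(j2, f1)]  # j2's vector w.r.t. j1's field
            z += sum(x * y for x, y in zip(w1, w2))
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> CTR estimate
```

The field-aware twist is visible in the two lookups: each pair of features interacts through latent vectors indexed by the partner's field, rather than a single shared vector as in a plain factorization machine.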
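The script add_dummy_label.py from step 3 is not reproduced here, but its job is simply to give the unlabeled test set the same column layout as the training set. A minimal sketch of such a transformation, assuming the Avazu CSV layout (`id` first, `click` second) — the column position is an assumption, not taken from the actual script:

```python
import csv
import io

def add_dummy_label(src_lines, label_col="click", dummy="0"):
    """Insert a constant label column after the id column so a test CSV
    matches the training CSV's layout."""
    reader = csv.reader(src_lines)
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    header = next(reader)
    writer.writerow([header[0], label_col] + header[1:])  # id, click, rest
    for row in reader:
        writer.writerow([row[0], dummy] + row[1:])
    return out.getvalue()
```

With identical columns in train and test, the same feature-conversion code path can be reused for both files.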
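Step 4's checksum can equally be verified from Python when the `md5sum` utility is unavailable; a standard-library sketch:

```python
import hashlib

def md5_of(path, chunk=1 << 20):
    """Hex MD5 digest of a file, read in 1 MB chunks (equivalent to
    the digest column of `md5sum <path>`)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
```

Comparing the returned hex string against the values listed in step 4 confirms the symlink and the dummy-label step produced the expected files.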
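The `ensemble' part combines 20 models' predictions. The repository's exact blending scheme is not described above; as an illustration only, one common way to merge per-row CTR predictions is to average them in log-odds space:

```python
import math

def blend(pred_lists, in_logit=True):
    """Average several models' CTR predictions row by row.

    pred_lists: one list of probabilities per model, aligned by row.
    Averaging in log-odds space is an assumption for illustration; the
    repository's actual ensembling may differ.
    """
    blended = []
    for row in zip(*pred_lists):
        if in_logit:
            z = sum(math.log(p / (1.0 - p)) for p in row) / len(row)
            blended.append(1.0 / (1.0 + math.exp(-z)))
        else:
            blended.append(sum(row) / len(row))  # plain arithmetic mean
    return blended
```

Log-odds averaging behaves like a geometric mean of odds, which tends to suit log-loss-scored competitions better than a plain mean of probabilities.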
