[//]: # (Image References)
[image1]: ./Pictures/keras_summary.png "keras_summary"
[image2]: ./Pictures/eeg_learn_overview_architecture.png "eeg_learn_overview_architecture"
[image3]: ./Pictures/hanning.png "hanning"
[image4]: ./Pictures/one-second-wave-n-fft-2.png "one-second-wave-n-fft-2"
[image5]: ./Pictures/projections.png "projections"
[image6]: ./Pictures/four_channels.png "four_channels"

# EEG-Classification

This project is a joint effort with neurology labs at UNL and UCD Anschutz to use deep learning to classify EEG data. The goal is to use various data processing techniques and deep neural network architectures to preserve both spatial and temporal information in the classification of EEG data. For a more concise and visually pleasing presentation of this project, please see the included PDF (galvanize_36x48_Tevis_Gehr_EEG_2.pdf).

## Introduction

An electroencephalogram (EEG) is a test that detects electrical activity in your brain using small, flat metal discs (electrodes) attached to your scalp. Your brain cells communicate via electrical impulses and are active all the time, even when you're asleep. This activity shows up as wavy lines on an EEG recording. [Mayo Clinic]

The goal of this project is to classify brain states from EEG data. A joint CU Anschutz/UNL project has collected EEG data on subjects during sessions in which the subjects were instructed to visualize performing a motor-based task. Each subject performed one session visualizing a very familiar task and another session visualizing an unfamiliar task. The primary goal is to develop a classifier that can correctly identify whether a subject is visualizing a familiar or an unfamiliar task. Secondary goals include providing insight into which brain regions and frequency bands are associated with each of the respective classes. If a deep learning approach proves viable, these insights may correspond to latent features found within the neural network.
Other insights may be obtained from more traditional data processing and machine learning techniques.

## The Data

The data are in the form of CSV files containing raw waveform signals from 14 probes placed around the scalp. The sampling rate is 128 Hz, which allows for frequency analysis up to the 64 Hz Nyquist limit. Each of 8 subjects participated in two one-minute sessions, so the total number of datapoints is on the order of 14 × 128 × 60 × 8 × 2 = 1,720,320. Several additional subjects are expected to perform recording sessions during the next few weeks.

The image below shows the raw waveform data from four of the 14 channels during a typical session. EMG artifacts (such as those caused by swallowing or yawning) were manually removed.

![alt text][image6]
#### Figure 1: Raw waveform data from four of the 14 EEG probes

## Tiers

The minimum result required for this project to be a full success is a classifier capable of accurately classifying snippets of EEG session data as coming from the visualization of either a familiar or an unfamiliar skill. Because this is a binary classification problem with balanced classes, the baseline accuracy is 0.5. Full success would mean an accuracy of at least 70% (although this number is arbitrary). State-of-the-art EEG classification techniques currently score considerably higher than this [1][2]. Data processing and augmentation are expected to be important, and multiple approaches will be considered.

Once a viable classifier has been developed, the goals are twofold. The first is to use modern deep learning techniques to maximize test accuracy. One proposed approach, outlined in [1], projects the transformed EEG frequencies into a 2D image with a depth dimension for frequency band. This format makes ideal input for a standard convolutional neural network. This or another deep learning approach may be used to achieve the highest possible accuracy.
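To make the data shapes concrete, here is a minimal sketch of slicing one simulated session (one minute of 14-channel data at 128 Hz, per the figures above) into overlapping one-second frames as described in the Data Processing section. The 50% overlap, function names, and use of random data in place of the real CSVs are all assumptions for illustration.

```python
import numpy as np

FS = 128          # sampling rate in Hz (from this README)
N_CHANNELS = 14   # number of EEG probes
SESSION_SEC = 60  # one-minute session

def make_frames(session, frame_len=FS, step=FS // 2):
    """Slice a (n_samples, n_channels) session into overlapping
    one-second frames (50% overlap assumed)."""
    starts = range(0, session.shape[0] - frame_len + 1, step)
    return np.stack([session[s:s + frame_len] for s in starts])

# Simulated stand-in for one subject's session (the real data live in the CSVs).
session = np.random.randn(SESSION_SEC * FS, N_CHANNELS)
frames = make_frames(session)

print(session.size)   # 107,520 samples; ×8 subjects ×2 sessions = 1,720,320
print(frames.shape)   # (119, 128, 14)
```

Each frame is then ready for windowing and the FFT described below.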
The second set of goals centers around providing insight into the underlying mechanisms of brain function. The methods for accomplishing these goals will depend on the details of the machine learning algorithms that are able to successfully classify the EEG data.

## Approach

Drawing on previous EEG research by Bashivan et al. [1], as well as the latest advances in video classification [3], the approach was to process the 14-channel time-series data into discrete one-second 'frames' and project these frames onto a 2D map of the surface of the head. A convolutional neural network (CNN) was then trained to classify the frames.

![alt text][image2]
#### Figure 2: EEG classification architecture proposed by [1]

## Data Processing

The following data processing techniques were used in this project.

**Hanning Window:** First, the data were chopped up into overlapping one-second 'frames' and a Hanning window was applied to each frame.

**Fast Fourier Transform (FFT):** An FFT was applied to transform each frame from the time domain to the frequency domain.

**Frequency Binning:** FFT amplitudes were grouped into theta (4–8 Hz), alpha (8–12 Hz), and beta (12–40 Hz) ranges, giving 3 scalar values for each probe per frame.

**2D Azimuthal Projection:** These 3 values were interpreted as RGB color channels and projected onto a 2D map of the head.

![alt text][image3]
#### Figure 3: Hanning-windowed one-second frame and its FFT; overlapping one-second 'frames'

![alt text][image4]
#### Figure 4: One 'frame'

![alt text][image5]
#### Figure 5: 2D projections of the theta, alpha, and beta ranges

## Network Architecture

A convolutional neural network was iteratively constructed and tuned to give the best classification accuracy with the data available. The final architecture is shown below.

#### Table 1: Summary of the convolutional neural network in Keras
![alt text][image1]

## Results and Discussion

The results obtained are encouraging.
Without even using a recurrent neural network (the next logical step; see [1]), the CNN is able to correctly classify the test subject's brain state about 8.5 times out of 10. This is likely high enough to enable a new level of performance in brain-computer interface (BCI) technologies. However, the best results were obtained when the network was trained on samples from the same recording session. While this may be practical for basic brain research, it would be less practical for use in BCI technology.

The results suggest that while EEG signals do generalize between individuals, there are still significant variations from person to person, which is an unsurprising finding. This further suggests that using EEG for BCI will likely require an iterative approach of training on a large population and then fine-tuning on a specific individual. It is therefore recommended that future research explore the application of transfer learning techniques to the classification of EEG signals.

## Citations

#### [1] Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks
19 Nov 2015. Bashivan et al. Cornell University Library. https://arxiv.org/abs/1511.06448

#### [2] A novel deep learning approach for classification of EEG motor imagery signals
30 Nov 2016. Tabar and Halici. IOP Publishing. http://iopscience.iop.org/article/10.1088/1741-2560/14/1/016003/meta

#### [3] Beyond Short Snippets: Deep Networks for Video Classification
13 Apr 2015. Ng et al. Cornell University Library. https://arxiv.org/abs/1503.08909
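As a closing illustration, the Hanning window → FFT → frequency-binning steps from the Data Processing section can be sketched as follows. The band edges are the ones given above; summing spectral amplitudes within each band, the function name, and the array shapes are assumptions for illustration, not the repo's exact implementation (see eeg_learn_functions.py for that).

```python
import numpy as np

FS = 128  # sampling rate in Hz (from this README)

# Frequency bands from the Data Processing section
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (12, 40)}

def band_amplitudes(frame):
    """Window a one-second frame (shape (FS, n_channels)) with a Hanning
    window, FFT it, and sum spectral amplitudes within each band.
    Returns an (n_channels, 3) array: one theta/alpha/beta triple per probe."""
    windowed = frame * np.hanning(frame.shape[0])[:, None]
    amps = np.abs(np.fft.rfft(windowed, axis=0))      # (FS//2 + 1, n_channels)
    freqs = np.fft.rfftfreq(frame.shape[0], d=1.0 / FS)
    cols = [amps[(freqs >= lo) & (freqs < hi)].sum(axis=0)
            for lo, hi in BANDS.values()]
    return np.stack(cols, axis=1)

frame = np.random.randn(FS, 14)   # one simulated 14-probe, one-second frame
triples = band_amplitudes(frame)
print(triples.shape)  # (14, 3): one RGB-like triple per probe, ready to project
```

The resulting per-probe triples are what get interpreted as RGB channels and azimuthally projected onto the 2D head map before being fed to the CNN.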