Play-Spark-Scala

Category: Cloud Computing
Development tool: Scala
File size: 48KB
Downloads: 0
Upload date: 2015-08-13 11:13:51
Uploader: sh-1993
Description: Play with Spark in Scala
(Play-Spark-Scala)

File list:
app (0, 2014-07-21)
app\controllers (0, 2014-07-21)
app\controllers\Application.scala (327, 2014-07-21)
app\utils (0, 2014-07-21)
app\utils\SimpleUtility.scala (626, 2014-07-21)
app\utils\SparkMLLibUtility.scala (1329, 2014-07-21)
app\utils\SparkSQL.scala (1212, 2014-07-21)
app\utils\TwitterPopularTags.scala (2722, 2014-07-21)
app\views (0, 2014-07-21)
app\views\index.scala.html (65, 2014-07-21)
app\views\main.scala.html (904, 2014-07-21)
build.sbt (520, 2014-07-21)
conf (0, 2014-07-21)
conf\application.conf (1870, 2014-07-21)
conf\routes (321, 2014-07-21)
project (0, 2014-07-21)
project\build.properties (19, 2014-07-21)
project\plugins.sbt (297, 2014-07-21)
public (0, 2014-07-21)
public\data (0, 2014-07-21)
public\data\sample_naive_bayes_data.txt (48, 2014-07-21)
public\images (0, 2014-07-21)
public\images\favicon.png (687, 2014-07-21)
public\javascripts (0, 2014-07-21)
public\javascripts\jquery-1.9.0.min.js (93068, 2014-07-21)
public\stylesheets (0, 2014-07-21)
public\stylesheets\main.css (0, 2014-07-21)
public\stylesheets\style.css (5780, 2014-07-21)
test (0, 2014-07-21)
test\ApplicationSpec.scala (789, 2014-07-21)
test\IntegrationSpec.scala (564, 2014-07-21)

spark-scala
===========

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Scala, Java, and Python that make parallel jobs easy to write, and an optimized engine that supports general computation graphs. It also supports a rich set of higher-level tools, including Shark (Hive on Spark), MLlib for machine learning, GraphX for graph processing, and Spark Streaming.

This is a Spark application built with Play 2.2.0. It can be built with any Play version; the one thing to keep in mind is that the Akka version must be compatible with both Spark and Play, so check the Akka versions bundled with each.

Commands to run this application:

1. play clean
2. play compile
3. play dist
4. play run

Notes:

> Whenever a change is made in the application, run "play dist"; otherwise changes in the Spark files/functions will not be reflected.

> To run only the Spark Streaming, Spark SQL, or Spark MLlib parts, you don't need to run "play dist".

We have upgraded our application to Apache Spark 1.0.1.
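The Akka-compatibility note above can be made concrete with a build definition. The repository's actual build.sbt is not reproduced here, so the following is a hypothetical sketch of what it might contain, assuming the Spark 1.0.1 version mentioned in the README and the Play 2.2.x settings style of that era (Spark 1.0.x and Play 2.2.x both sit on the Akka 2.2.x line, which is why they can coexist in one application):

```scala
// Hypothetical build.sbt sketch -- not the project's actual file.
// Illustrates declaring Spark 1.0.1 modules alongside Play 2.2.x,
// whose bundled Akka versions are both in the 2.2.x line.
name := "Play-Spark-Scala"

version := "1.0"

libraryDependencies ++= Seq(
  // Core Spark engine
  "org.apache.spark" %% "spark-core" % "1.0.1",
  // Modules matching the utils in this project
  // (SparkSQL.scala, SparkMLLibUtility.scala, TwitterPopularTags.scala)
  "org.apache.spark" %% "spark-sql"               % "1.0.1",
  "org.apache.spark" %% "spark-mllib"             % "1.0.1",
  "org.apache.spark" %% "spark-streaming"         % "1.0.1",
  "org.apache.spark" %% "spark-streaming-twitter" % "1.0.1"
)

// Play 2.2.x project settings (replaced by an AutoPlugin in later Play versions)
play.Project.playScalaSettings
```

With a build like this, "play dist" packages the application together with the Spark jars, which is why the README insists on re-running it after changing the Spark-facing code.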
