scala – Which Spark version supports SparkSession?
I am creating a Spark session with SparkSession in my code:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .master("local")
  .appName("testing")
  .enableHiveSupport() // <- enable Hive support
  .getOrCreate()

My pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.cms.spark</groupId>
  <artifactId>cms-spark</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>cms-spark</name>

  <pluginRepositories>
    <pluginRepository>
      <id>scala-tools.org</id>
      <name>Scala-tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
  </pluginRepositories>

  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.6.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.10</artifactId>
      <version>1.6.0</version>
    </dependency>
    <dependency>
      <groupId>com.databricks</groupId>
      <artifactId>spark-csv_2.10</artifactId>
      <version>1.4.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.10</artifactId>
      <version>1.5.2</version>
    </dependency>
    <dependency>
      <groupId>org.jsoup</groupId>
      <artifactId>jsoup</artifactId>
      <version>1.8.3</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.5.3</version>
        <configuration>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id> <!-- this is used for inheritance merges -->
            <phase>install</phase> <!-- bind to the packaging phase -->
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Here is my problem: I use SparkSession to create the Spark session, but I cannot find SparkSession anywhere in the Spark SQL library, so the code does not compile. Which Spark version do I need in order to get SparkSession? My pom.xml is shown above. Thanks.
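For context: SparkSession does not exist in the Spark 1.x artifacts pinned in this pom; the entry points there are SparkContext (core) and HiveContext (SQL with Hive support). A minimal sketch of the closest 1.x equivalent of the snippet above:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Spark 1.x setup: configure and create the core context first.
val conf = new SparkConf().setMaster("local").setAppName("testing")
val sc = new SparkContext(conf)

// HiveContext is the Hive-enabled SQL entry point in Spark 1.x,
// provided by the spark-hive artifact.
val sqlContext = new HiveContext(sc)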
Solution

You need both the core and the SQL artifacts; SparkSession was introduced in Spark 2.0, so the 1.x versions in your pom are too old:
<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.0.0-cloudera1-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.0.0-cloudera1-SNAPSHOT</version>
  </dependency>
</dependencies>
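With Spark 2.0 artifacts on the classpath, the snippet from the question works once the import points at org.apache.spark.sql.SparkSession; note that .enableHiveSupport() additionally requires the spark-hive artifact at the matching version. A minimal usage sketch (the CSV file name is a placeholder for illustration):

import org.apache.spark.sql.SparkSession

// SparkSession (Spark 2.0+) replaces SQLContext/HiveContext as the
// single entry point for DataFrame and SQL work.
val spark = SparkSession.builder
  .master("local")
  .appName("testing")
  .enableHiveSupport() // also needs spark-hive_2.11 on the classpath
  .getOrCreate()

// CSV support is built into Spark 2.0, so the separate
// com.databricks:spark-csv dependency from the old pom can be dropped.
val df = spark.read.option("header", "true").csv("people.csv") // placeholder path
df.printSchema()

spark.stop()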