In my spylon notebook, most operations work: I can read datasets through the sparkSession, show them, and so on. However, when I try to use the sparkContext, it reports that it has been stopped. Here is the code I was running and the error:
```scala
val bRetailersList = (
  sparkSession.sparkContext
    .broadcast(trainedModel.itemFactors.select("id")
      .rdd.map(x => x(0).asInstanceOf[Int]).collect)
)
```

```
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:240)
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
py4j.Gateway.invoke(Gateway.java:236)
py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
py4j.GatewayConnection.run(GatewayConnection.java:214)
java.lang.Thread.run(Thread.java:745)

The currently active SparkContext was created at:

org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
org.apache.spark.ml.util.BaseReadWrite$class.sparkSession(ReadWrite.scala:69)
org.apache.spark.ml.util.MLReader.sparkSession(ReadWrite.scala:189)
org.apache.spark.ml.util.BaseReadWrite$class.sc(ReadWrite.scala:80)
org.apache.spark.ml.util.MLReader.sc(ReadWrite.scala:189)
org.apache.spark.ml.recommendation.ALSModel$ALSModelReader.load(ALS.scala:317)
org.apache.spark.ml.recommendation.ALSModel$ALSModelReader.load(ALS.scala:311)
org.apache.spark.ml.util.MLReadable$class.load(ReadWrite.scala:227)
org.apache.spark.ml.recommendation.ALSModel$.load(ALS.scala:297)
<init>(<console>:53)
<init>(<console>:58)
<init>(<console>:60)
<init>(<console>:62)
<init>(<console>:64)
<init>(<console>:66)
<init>(<console>:68)
<init>(<console>:70)
<init>(<console>:72)
<init>(<console>:74)
<init>(<console>:76)
  at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:101)
  at org.apache.spark.sql.SparkSession.<init>(SparkSession.scala:80)
  at org.apache.spark.sql.SparkSession.<init>(SparkSession.scala:77)
  ... 44 elided
```
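The trace above says the stopped context came from a py4j-created `JavaSparkContext`, while a different, still-active context was created later by `SparkSession.Builder.getOrCreate` during `ALSModel.load`. A minimal diagnostic sketch, assuming the same `sparkSession` and `trainedModel` as in the snippet above (the `spark` val and the rebuilt-session workaround are my assumptions, not confirmed as the fix):

```scala
// Check whether the context held by the session is the stopped one.
val sc = sparkSession.sparkContext
println(sc.isStopped) // if this prints true, any broadcast on it will throw

// Possible workaround (assumption): obtain the currently active session via
// getOrCreate, which returns the live context instead of the stopped one,
// and broadcast through that.
val spark = org.apache.spark.sql.SparkSession.builder.getOrCreate()
val retailerIds = trainedModel.itemFactors
  .select("id")
  .rdd
  .map(_.getInt(0))
  .collect()
val bRetailersList = spark.sparkContext.broadcast(retailerIds)
```

This only sidesteps the symptom; the underlying question is why two contexts exist in the same kernel at all.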
What is mp.sparkSession?
Sorry, this was transferred from adtech-labs/spylon#45. It's a SparkSession.