This repository has been archived by the owner on Oct 1, 2024. It is now read-only.

Spark empty DataFrame with columns cannot be written to BigQuery #6

yogesh-0586 opened this issue Jan 30, 2019 · 1 comment
yogesh-0586 commented Jan 30, 2019

Writing an empty DataFrame that has column names shows the following error.

Data frame as below

+---+-----+---+
| id|start|end|
+---+-----+---+
+---+-----+---+

root
|-- id: string (nullable = true)
|-- start: long (nullable = false)
|-- end: long (nullable = false)

ERROR:
at com.miraisolutions.spark.bigquery.client.BigQueryClient.waitForJob(BigQueryClient.scala:375)
at com.miraisolutions.spark.bigquery.client.BigQueryClient.importTable(BigQueryClient.scala:356)
at com.miraisolutions.spark.bigquery.DefaultSource$$anonfun$createRelation$2$$anonfun$apply$4.apply(DefaultSource.scala:74)
at com.miraisolutions.spark.bigquery.DefaultSource$$anonfun$createRelation$2$$anonfun$apply$4.apply(DefaultSource.scala:63)
at com.miraisolutions.spark.bigquery.DefaultSource$TypeParameter.foldType(DefaultSource.scala:218)
at com.miraisolutions.spark.bigquery.DefaultSource$$anonfun$createRelation$2.apply(DefaultSource.scala:63)
at com.miraisolutions.spark.bigquery.DefaultSource$$anonfun$createRelation$2.apply(DefaultSource.scala:62)
at com.miraisolutions.spark.bigquery.DefaultSource$.com$miraisolutions$spark$bigquery$DefaultSource$$withBigQueryClient(DefaultSource.scala:202)
at com.miraisolutions.spark.bigquery.DefaultSource.createRelation(DefaultSource.scala:62)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:469)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:50)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:609)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
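The report does not include the code that produced this trace, so the following is only a hypothetical reproduction sketch: it builds an empty DataFrame with the schema shown above and writes it through the connector's data source. The `format("bigquery")` name is taken from the connector's package; the table option name and all project/dataset/table values are placeholders, not confirmed settings.

```scala
// Hypothetical reproduction sketch -- the actual failing code was not posted.
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._

object EmptyDataFrameRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("empty-dataframe-bigquery-repro")
      .getOrCreate()

    // Same schema as the printSchema() output above
    val schema = StructType(Seq(
      StructField("id", StringType, nullable = true),
      StructField("start", LongType, nullable = false),
      StructField("end", LongType, nullable = false)
    ))

    // An empty DataFrame that still carries column names and types
    val emptyDf = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], schema)

    // Writing zero rows is where BigQueryClient.importTable fails per the trace
    emptyDf.write
      .format("bigquery")
      .option("table", "my_project.my_dataset.my_table") // placeholder reference
      .save()

    spark.stop()
  }
}
```

If the write only fails when the DataFrame is empty, a trivial workaround is to guard with `if (!df.isEmpty) df.write...`, though that skips creating the target table.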

@spoltier
Member

Hi, can you provide the code you are trying to execute, or at least the exception that is causing that stack trace? I'm not able to reproduce an error specific to the DataFrame being empty.

Regards
Simon

@spoltier spoltier self-assigned this Aug 29, 2019