Package org.apache.spark.sql
AnalysisException: Thrown when a query fails to analyze, usually because the query itself is invalid.
Column: A column that will be computed based on the data in a DataFrame.
ColumnName: A convenient class used for constructing schema.
CreateTableWriter<T>: Trait to restrict calls to create and replace operations.
DataFrameNaFunctions: Functionality for working with missing data in DataFrames.
DataFrameReader: Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores).
DataFrameStatFunctions: Statistic functions for DataFrames.
DataFrameWriter<T>: Interface used to write a Dataset to external storage systems (e.g. file systems, key-value stores).
DataFrameWriterV2<T>: Interface used to write a Dataset to external storage using the v2 API.
Dataset<T>: A strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations.
DatasetHolder<T>: A container for a Dataset, used for implicit conversions in Scala.
Encoder<T>: Used to convert a JVM object of type T to and from the internal Spark SQL representation.
EncoderImplicits: Used to implicitly generate SQL Encoders.
Encoders: Methods for creating an Encoder.
ExperimentalMethods: :: Experimental :: Holder for experimental methods for the bravest.
ExtendedExplainGenerator: A trait for a session extension to implement that provides additional explain plan information.
ForeachWriter<T>: The abstract class for writing custom logic to process data generated by a query.
functions: Commonly used functions available for DataFrame operations.
KeyValueGroupedDataset<K,V>: A Dataset that has been logically grouped by a user-specified grouping key.
LowPrioritySQLImplicits: Lower-priority implicit methods for converting Scala objects into Datasets.
MergeIntoWriter<T>: Provides methods to define and execute merge actions based on specified conditions.
Observation: Helper class to simplify usage of Dataset.observe(String, Column, Column*).
Row: Represents one row of output from a relational operator.
RowFactory: A factory class used to construct Row objects.
RuntimeConfig: Runtime configuration interface for Spark.
SaveMode: Used to specify the expected behavior of saving a DataFrame to a data source.
SparkSession: The entry point to programming Spark with the Dataset and DataFrame API.
SparkSessionExtensions: :: Experimental :: Holder for injection points to the SparkSession.
SparkSessionExtensionsProvider: :: Unstable :: Base trait for implementations used by SparkSessionExtensions.
SQLContext: The entry point for working with structured data (rows and columns) in Spark 1.x.
SQLContext (companion object): Utility functions to create a singleton SQLContext instance, or to get the created SQLContext instance.
SQLImplicits: A collection of implicit methods for converting common Scala objects into Datasets.
TableValuedFunction: Interface for invoking table-valued functions in Spark SQL.
TypedColumn<T,U>: A Column where an Encoder has been given for the expected input and return type.
UDFRegistration: Functions for registering user-defined functions.
WhenMatched<T>: A class for defining actions to be taken when matching rows in a DataFrame during a merge operation.
WhenNotMatched<T>: A class for defining actions to be taken when no matching rows are found in a DataFrame during a merge operation.
WhenNotMatchedBySource<T>: A class for defining actions to be performed when there is no match by source during a merge operation in a MergeIntoWriter.
WriteConfigMethods<R>: Configuration methods common to create/replace operations and insert/overwrite operations.

Usage sketches for several of these classes follow below.
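SparkSession ties several of these classes together: spark.read() returns a DataFrameReader, Dataset.write() returns a DataFrameWriter, SaveMode selects the save behavior, and spark.conf() exposes the RuntimeConfig. A minimal Java sketch, assuming a local session and hypothetical file paths:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;

    public class ReadWriteSketch {
      public static void main(String[] args) {
        // SparkSession: the entry point to the Dataset and DataFrame API.
        SparkSession spark = SparkSession.builder()
            .appName("read-write-sketch")
            .master("local[*]")
            .getOrCreate();

        // RuntimeConfig: per-session runtime configuration.
        spark.conf().set("spark.sql.shuffle.partitions", "8");

        // DataFrameReader: load a Dataset<Row> from external storage.
        Dataset<Row> people = spark.read()
            .option("header", "true")
            .csv("/tmp/people.csv"); // hypothetical input path

        // DataFrameWriter: save it; SaveMode says what to do if the target exists.
        people.write().mode(SaveMode.Overwrite).parquet("/tmp/people.parquet"); // hypothetical output path

        spark.stop();
      }
    }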
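Column and functions express relational transformations, while DataFrameNaFunctions and DataFrameStatFunctions are reached through Dataset.na() and Dataset.stat(). A sketch, assuming a DataFrame with hypothetical age, income, and city columns:

    import static org.apache.spark.sql.functions.avg;
    import static org.apache.spark.sql.functions.col;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class ColumnOpsSketch {
      static void demo(Dataset<Row> df) {
        // Column: col("age").gt(21) builds an expression evaluated against df's data.
        Dataset<Row> adults = df.filter(col("age").gt(21));

        // functions: commonly used helpers such as avg for aggregation.
        Dataset<Row> avgAgeByCity = df.groupBy(col("city")).agg(avg(col("age")));

        // DataFrameNaFunctions via df.na(): fill missing numeric values with 0.
        Dataset<Row> filled = df.na().fill(0);

        // DataFrameStatFunctions via df.stat(): correlation of two numeric columns.
        double corr = df.stat().corr("age", "income");

        adults.show();
        avgAgeByCity.show();
        filled.show();
        System.out.println(corr);
      }
    }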
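Encoder and Encoders back the typed Dataset operations. A sketch, assuming an existing session, that builds a Dataset<String> and maps it with an explicit encoder:

    import java.util.Arrays;

    import org.apache.spark.api.java.function.MapFunction;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.SparkSession;

    public class EncoderSketch {
      static void demo(SparkSession spark) {
        // Encoders supplies Encoder<T> instances; an Encoder converts a JVM object
        // of type T to and from Spark SQL's internal representation.
        Dataset<String> names = spark.createDataset(
            Arrays.asList("alice", "bob"), Encoders.STRING());

        // Typed, parallel transformation on a strongly typed Dataset.
        Dataset<Integer> lengths = names.map(
            (MapFunction<String, Integer>) String::length, Encoders.INT());

        lengths.show();
      }
    }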
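KeyValueGroupedDataset is what Dataset.groupByKey returns; mapGroups then reduces each group to a single value. A word-count sketch over an assumed Dataset<String> of words:

    import org.apache.spark.api.java.function.MapFunction;
    import org.apache.spark.api.java.function.MapGroupsFunction;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;

    public class GroupByKeySketch {
      static Dataset<String> wordCounts(Dataset<String> words) {
        // groupByKey returns a KeyValueGroupedDataset<String, String>:
        // the Dataset logically grouped by a user-specified key.
        return words
            .groupByKey((MapFunction<String, String>) w -> w, Encoders.STRING())
            // mapGroups walks each group's iterator and emits one result per key.
            .mapGroups((MapGroupsFunction<String, String, String>) (key, values) -> {
              int n = 0;
              while (values.hasNext()) {
                values.next();
                n++;
              }
              return key + ": " + n;
            }, Encoders.STRING());
      }
    }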
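Row and RowFactory pair with an explicit schema when constructing a DataFrame by hand. A sketch with hypothetical name/age fields:

    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.RowFactory;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;

    public class RowFactorySketch {
      static Dataset<Row> demo(SparkSession spark) {
        // Row itself is untyped, so a schema tells Spark how to read each field.
        StructType schema = new StructType()
            .add("name", DataTypes.StringType)
            .add("age", DataTypes.IntegerType);

        // RowFactory: convenient Row construction from Java.
        List<Row> rows = Arrays.asList(
            RowFactory.create("alice", 30),
            RowFactory.create("bob", 25));

        return spark.createDataFrame(rows, schema);
      }
    }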
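Observation attaches named metric expressions to a Dataset via observe() and reports them once an action has run. A sketch, assuming Spark 3.3 or later and an arbitrary batch DataFrame:

    import java.util.Map;

    import static org.apache.spark.sql.functions.count;
    import static org.apache.spark.sql.functions.lit;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Observation;
    import org.apache.spark.sql.Row;

    public class ObservationSketch {
      static void demo(Dataset<Row> df) {
        // Observation names a set of metric expressions attached via observe().
        Observation observation = new Observation("stats");
        Dataset<Row> observed = df.observe(observation, count(lit(1)).as("rows"));

        // The metrics are collected as a side effect of a normal action.
        observed.count();

        // getAsJava blocks until the observed action has finished.
        Map<String, Object> metrics = observation.getAsJava();
        System.out.println(metrics.get("rows"));
      }
    }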
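MergeIntoWriter, WhenMatched, WhenNotMatched, and WhenNotMatchedBySource form the builder behind Dataset.mergeInto (Spark 4.0); the merge only succeeds against a data source that supports row-level operations. A sketch with hypothetical table and column names:

    import static org.apache.spark.sql.functions.expr;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    public class MergeSketch {
      static void upsert(Dataset<Row> updates) {
        // mergeInto returns a MergeIntoWriter; whenMatched()/whenNotMatched()
        // return WhenMatched/WhenNotMatched handles that each define one action.
        updates.as("source")
            .mergeInto("target", expr("source.id = target.id"))
            .whenMatched().updateAll()    // matched rows: overwrite with source values
            .whenNotMatched().insertAll() // unmatched rows: insert source rows
            .merge();                     // execute the merge
      }
    }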
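ForeachWriter supplies custom sink logic for a streaming query, with one open/process/close cycle per partition and epoch. A sketch, assuming streamingDf is a streaming Dataset<Row>:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.ForeachWriter;
    import org.apache.spark.sql.Row;

    public class ForeachSketch {
      static void sink(Dataset<Row> streamingDf) throws Exception {
        // ForeachWriter: custom logic to process data generated by a query.
        ForeachWriter<Row> writer = new ForeachWriter<Row>() {
          @Override
          public boolean open(long partitionId, long epochId) {
            return true; // e.g. open a connection; false skips this partition
          }

          @Override
          public void process(Row value) {
            System.out.println(value); // send the row to the external system
          }

          @Override
          public void close(Throwable errorOrNull) {
            // release resources
          }
        };

        // start() launches the query; a real job would also awaitTermination().
        streamingDf.writeStream().foreach(writer).start();
      }
    }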