Class

ksb.csle.component.pipe.stream.writer

KafkaPipeWriter

class KafkaPipeWriter extends BasePipeWriter[DataFrame, StreamingQuery, StreamPipeWriterInfo, SparkSession]

:: ApplicationDeveloperApi ::

Writer that writes a DataFrame to Kafka.
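
Conceptually, this writer wires the configured attributes into Spark's structured-streaming Kafka sink. As an illustration only (not the actual KSB implementation), the raw Spark equivalent looks roughly like the sketch below; the server, topic, and path values are placeholders:

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.streaming.{StreamingQuery, Trigger}

    // df must be a streaming DataFrame with a "value" column
    // (and optionally a "key" column), as the Kafka sink requires.
    def startKafkaSink(df: DataFrame): StreamingQuery =
      df.writeStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092") // bootStrapServers
        .option("topic", "output_topic")                     // topic
        .option("checkpointLocation", "/tmp/kafka_ckpt")     // checkpointLocation
        .outputMode("append")                                // mode
        .trigger(Trigger.ProcessingTime("5 seconds"))        // trigger
        .start()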

Linear Supertypes
BasePipeWriter[DataFrame, StreamingQuery, StreamPipeWriterInfo, SparkSession], BaseGenericPipeOperator[DataFrame, DataFrame, StreamingQuery, StreamPipeWriterInfo, SparkSession], BaseGenericMutantOperator[StreamPipeWriterInfo, DataFrame, (DataFrame) ⇒ StreamingQuery], BaseDoer, Logging, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new KafkaPipeWriter(o: StreamPipeWriterInfo, session: SparkSession)

    o

    Object that contains the message ksb.csle.common.proto.DatasourceProto.KafkaPipeWriterInfo. KafkaPipeWriterInfo contains the following attributes:

    • mode: Write mode, such as append, update, or complete
    • trigger: Triggering interval
    • bootStrapServers: Address of the Kafka server (required)
    • zooKeeperConnect: Address of the Kafka ZooKeeper (required)
    • topic: Topic to which data is written (required)
    • checkpointLocation: Checkpoint file path
    • failOnDataLoss: Determines whether a streaming query should fail if it is possible that data has been lost (e.g., topics are deleted, or offsets are out of range). It is important to monitor your streaming queries, especially with temporal infrastructure like Kafka. Offsets typically go out of range when Kafka's log cleaner activates. If a streaming query cannot process data quickly enough, it may fall behind the earliest offsets after the log cleaner rolls a log segment. Sometimes failOnDataLoss may be a false alarm; you can disable it if it does not work as expected for your use case. Refer to the following site for more information: https://github.com/vertica/PSTL/wiki/Kafka-Source

    KafkaPipeWriterInfo

    message KafkaPipeWriterInfo {
      required string mode = 1 [default = "append"];
      optional string trigger = 2;
      required string bootStrapServers = 3;
      required string zooKeeperConnect = 4;
      required string topic = 5;
      required string checkpointLocation = 6;
      required bool failOnDataLoss = 7 [default = false];
    }
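
    The snippet below is a configuration sketch, not verified against the KSB sources: it assumes the standard protobuf-generated builders for the message above, and the proto class locations and the wrapper field on StreamPipeWriterInfo (setKafkaPipeWriter here) are assumptions; check the generated ksb.csle.common.proto sources for the exact names.

    import org.apache.spark.sql.SparkSession
    import ksb.csle.common.proto.DatasourceProto.KafkaPipeWriterInfo
    // Assumed location and wrapper field for StreamPipeWriterInfo.
    import ksb.csle.common.proto.DatasourceProto.StreamPipeWriterInfo

    val session = SparkSession.builder()
      .appName("KafkaPipeWriterExample")
      .master("local[*]")
      .getOrCreate()

    // Standard protobuf builder calls; setter names mirror the proto fields above.
    val kafkaInfo = KafkaPipeWriterInfo.newBuilder()
      .setMode("append")                     // append, update, or complete
      .setTrigger("5 seconds")               // triggering interval
      .setBootStrapServers("localhost:9092") // Kafka broker address (required)
      .setZooKeeperConnect("localhost:2181") // ZooKeeper address (required)
      .setTopic("output_topic")              // target topic (required)
      .setCheckpointLocation("/tmp/kafka_ckpt")
      .setFailOnDataLoss(false)
      .build()

    // Assumed wrapper message with a kafkaPipeWriter field.
    val writerInfo = StreamPipeWriterInfo.newBuilder()
      .setKafkaPipeWriter(kafkaInfo)
      .build()

    val writer = new KafkaPipeWriter(writerInfo, session)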

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. def close: Unit

    Definition Classes
    KafkaPipeWriter → BasePipeWriter
  7. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  8. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  9. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  10. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  11. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  12. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  13. val logger: Logger

    Attributes
    protected
    Definition Classes
    Logging
  14. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  15. final def notify(): Unit

    Definition Classes
    AnyRef
  16. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  17. val o: StreamPipeWriterInfo


    Object that contains the message ksb.csle.common.proto.DatasourceProto.KafkaPipeWriterInfo. KafkaPipeWriterInfo contains the following attributes:

    • mode: Write mode, such as append, update, or complete
    • trigger: Triggering interval
    • bootStrapServers: Address of the Kafka server (required)
    • zooKeeperConnect: Address of the Kafka ZooKeeper (required)
    • topic: Topic to which data is written (required)
    • checkpointLocation: Checkpoint file path
    • failOnDataLoss: Determines whether a streaming query should fail if it is possible that data has been lost (e.g., topics are deleted, or offsets are out of range). It is important to monitor your streaming queries, especially with temporal infrastructure like Kafka. Offsets typically go out of range when Kafka's log cleaner activates. If a streaming query cannot process data quickly enough, it may fall behind the earliest offsets after the log cleaner rolls a log segment. Sometimes failOnDataLoss may be a false alarm; you can disable it if it does not work as expected for your use case. Refer to the following site for more information: https://github.com/vertica/PSTL/wiki/Kafka-Source

    KafkaPipeWriterInfo

    message KafkaPipeWriterInfo {
      required string mode = 1 [default = "append"];
      optional string trigger = 2;
      required string bootStrapServers = 3;
      required string zooKeeperConnect = 4;
      required string topic = 5;
      required string checkpointLocation = 6;
      required bool failOnDataLoss = 7 [default = false];
    }
  18. final def operate(df: DataFrame): (DataFrame) ⇒ StreamingQuery

    Definition Classes
    BasePipeWriter → BaseGenericPipeOperator → BaseGenericMutantOperator
  19. val p: KafkaPipeWriterInfo

  20. val session: SparkSession

  21. def stop: Unit

    Definition Classes
    BasePipeWriter → BaseGenericPipeOperator → BaseGenericMutantOperator
  22. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  23. def toString(): String

    Definition Classes
    AnyRef → Any
  24. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  25. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  26. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  27. def write(df: DataFrame): StreamingQuery

    Definition Classes
    KafkaPipeWriter → BasePipeWriter
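
    For illustration, continuing the constructor sketch above (writer and a streaming DataFrame df are assumed to exist): write starts the streaming query and returns the StreamingQuery handle, which the caller can block on before releasing the writer.

    // df is assumed to be a streaming DataFrame, e.g. produced by a Kafka reader.
    val query = writer.write(df)
    query.awaitTermination() // block until the query stops or fails
    writer.close             // release the writer's resources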
