ONNX Model format in Flink

Alexey Novakov published on

7 min, 1352 words

The most popular ecosystem for training ML models these days is Python and its C/C++ based libraries. An example of model training could be a Logistic Regression algorithm from the scikit-learn package, or more advanced neural network algorithms offered by TensorFlow, PyTorch and others. There are lots of tools and libraries in the Python world to facilitate training and model serving.

In order to bring models trained in Keras or scikit-learn into a Flink application, we can use cross-platform file formats such as ONNX or PMML (in the past). These formats also come with language runtimes. In Flink's case, we select the JVM SDK to run inference logic inside the Flink job.

Let's look at an example of how to train Logistic Regression in Python using Keras and then use the trained model in ONNX format inside Flink.
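To give a flavour of the inference side, below is a minimal sketch of loading an ONNX model with the ONNX Runtime Java library inside a Flink RichMapFunction written in Scala. The model path, the "input" tensor name and the Array[Float] feature vector are assumptions for illustration; the full post walks through the actual training and serving code.

```scala
import ai.onnxruntime.{OnnxTensor, OrtEnvironment, OrtSession}
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration

// Scores a feature vector with an ONNX model loaded once per task slot.
class OnnxScorer(modelPath: String) extends RichMapFunction[Array[Float], Float] {
  @transient private var env: OrtEnvironment = _
  @transient private var session: OrtSession = _

  override def open(parameters: Configuration): Unit = {
    env = OrtEnvironment.getEnvironment()
    session = env.createSession(modelPath, new OrtSession.SessionOptions())
  }

  override def map(features: Array[Float]): Float = {
    // "input" is an assumed tensor name; check the exported model's actual input name.
    val tensor = OnnxTensor.createTensor(env, Array(features))
    try {
      val result = session.run(java.util.Collections.singletonMap("input", tensor))
      try result.get(0).getValue.asInstanceOf[Array[Array[Float]]](0)(0)
      finally result.close()
    } finally tensor.close()
  }

  override def close(): Unit = {
    if (session != null) session.close()
    if (env != null) env.close()
  }
}
```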



Read More

Using Scala 3 with Apache Flink

Alexey Novakov published on

4 min, 625 words


If you have come here, then you probably know that the Scala API of the current Apache Flink version, 1.16, still depends on Scala 2.12. However, the good news is that the previous Flink release, 1.15, introduced an important change for Scala users which allows us to use our own Scala version via the Flink Java API. That means users can now write Flink jobs in Scala 2.13 or 3. Earlier, Flink had Scala 2.12 as a mandatory library on the application classpath, so one could not use a newer version due to version conflicts.
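As a quick illustration, here is a minimal Scala 3 job written directly against the Java DataStream API. This is only a sketch, not the code from the post; the TypeInformation is passed explicitly because the Scala-specific implicits are no longer available.

```scala
import org.apache.flink.api.common.typeinfo.Types
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

@main def wordsToUpper(): Unit =
  val env = StreamExecutionEnvironment.getExecutionEnvironment

  env
    .fromElements("flink", "with", "scala", "three")
    // explicit output type info, since we are on the plain Java API
    .map((word: String) => word.toUpperCase, Types.STRING)
    .print()

  env.execute("Scala 3 on the Java DataStream API")
```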



The official Flink Scala API will still be available for a while, but will probably be deprecated at some point; it is unclear when exactly. The Flink community made a decision not to chase newer Scala versions and to make Flink Scala-free, in the sense that users choose their own Scala version. Whether this is good or bad for Scala users in general, we will see in the near future. One thing is certain: this choice unlocks Flink for newer Scala versions.

Read More

Spark API Languages

Alexey Novakov published on

7 min, 1244 words



Context

As part of my role as a Data Architect at work, I often deal with AWS data services for running Apache Spark jobs, such as EMR and Glue ETL. At the very beginning, the team needed to choose a Spark-supported programming language and start writing our main jobs for data processing.

Before we dive further into the language choice, let's quickly recall what Spark on EMR is. Glue ETL is going to be skipped in this blog post.

Apache Spark is one of the main components of AWS EMR, and it is what keeps EMR a meaningful service for Big Data teams. The AWS EMR team builds its own Spark distribution to integrate it seamlessly with the other EMR applications. Even though Amazon builds its own Spark, they keep the version aligned with the open source release, so all features of Apache Spark are available in EMR Spark. EMR allows running a Spark application in an EMR cluster via a step type called “Spark Application”.
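A job submitted through such a step is just a regular Spark application packaged as a jar. As a rough Scala sketch (the S3 paths and the column name are placeholders, not from the post):

```scala
import org.apache.spark.sql.SparkSession

// Minimal Spark job that could be packaged as a jar and submitted
// to an EMR cluster as a "Spark Application" step.
object SalesReport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("sales-report")
      .getOrCreate()

    spark.read
      .parquet("s3://my-bucket/input/sales/")   // placeholder input path
      .groupBy("country")                        // placeholder column
      .count()
      .write
      .mode("overwrite")
      .parquet("s3://my-bucket/output/sales-by-country/")

    spark.stop()
  }
}
```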

Read More

CDC with Delta Lake Streaming

Alexey Novakov published on

5 min, 969 words



Change Data Capture (CDC) is a popular technique for replicating data from an OLTP to an OLAP data store. CDC tools usually integrate with the transaction logs of relational databases and are thus mainly dedicated to replicating all possible data changes from relational databases. NoSQL databases usually come with built-in CDC for any possible data change (insert, update, delete), for example AWS DynamoDB Streams.

In this blog post, we will look at the Delta Lake table format, which supports a "merge" operation. This operation is useful when we need to update replicated data in the Data Lake.
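As a rough sketch of what such a merge looks like in Scala (the post itself covers the streaming case), assuming an "id" join key and placeholder S3 paths:

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object ApplyCdcBatch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("cdc-merge")
      // Delta Lake needs these two settings on a plain Spark session
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    // Hypothetical CDC batch: the latest state of changed rows, keyed by "id"
    val updates = spark.read.parquet("s3://my-bucket/cdc/customers/")

    DeltaTable
      .forPath(spark, "s3://my-bucket/lake/customers/") // placeholder table path
      .as("target")
      .merge(updates.as("source"), "target.id = source.id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()
  }
}
```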

Read More