
Apache Beam just had its first API-stable release. Now that the API is stable, I want to share what's changed in the Beam ecosystem and highlight the growth of Beam as a project and its increased usage in pre-production/development and production deployments. Each committer and user is sharing their own opinion and not necessarily that of their company.

If you want to see how far Beam has come, you can read the original Q&A I did for the first release of Beam.

Our interviewees are:

How do you describe Beam to others?

JA: I describe Beam as a cross language and framework independent way of creating data pipelines.

JX: A unified API to do both batch and streaming computation — The new assembly language for big data processing.

JL: Apache Beam is a portable, unified, cross-language programming model. It is very flexible too. (I love StatefulDoFn.)

JF: I describe it as like Spark or Hadoop but with more plumbing out of the box, including deployment of third-party libraries (jars) and autoscaling of workers.

FP: A set of abstractions for expressing data processing use cases across the spectrum from simple batch to complex stream processing, plus the ability to mix and match SDKs and runners.

IM: Apache Beam is a unified programming model to express both batch and streaming data processing jobs (pipelines). Beam enables users to easily tune requirements around completeness and latency, and lets them run the same code across multiple runtime environments.
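To make that concrete, here is a minimal sketch of a Beam pipeline in Java. The file paths and class name are placeholders, not anything from the interview; the point is that the pipeline code is identical whichever engine executes it, with the runner chosen by a launch-time flag.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class PortablePipeline {
  public static void main(String[] args) {
    // The execution engine is chosen at launch time via a flag, e.g.
    //   --runner=DirectRunner    (local testing)
    //   --runner=FlinkRunner     (Apache Flink)
    //   --runner=SparkRunner     (Apache Spark)
    //   --runner=DataflowRunner  (Google Cloud Dataflow)
    // The pipeline code itself stays the same.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("ReadLines", TextIO.read().from("input.txt"))  // placeholder path
     .apply("WriteLines", TextIO.write().to("output"));    // placeholder prefix

    p.run().waitUntilFinish();
  }
}
```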

What would you say to convince someone to use Beam for their next project?

LC: Python support

KK: Beam is the state of the art for writing unified batch/streaming big data processing pipelines, letting you use the latest innovations in a variety of operational scenarios (on premise, managed service, various engines) and your programming language of choice (currently Java, soon Python, more later).

JX: Focus on what you really care about, the business logic, and let the professionals worry about how to implement it on big data engines.

JL: Just write your business logic and run it in the best engine!

JL: It is the next generation after MapReduce/Hadoop/Spark. Much easier to use. Just focus on your logic, not the plumbing.

AA: Python support and nice integration with TensorFlow through the tensorflow-transform project.

IM: Beam is future-proof: you won't need to rewrite your code if you plan to move to a different distributed system or cloud, which gives you incredible flexibility. Additionally, it is the most complete model for big data processing.

What’s your favorite feature of Beam?

JL: StatefulDoFn and windowing/triggers.
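For readers unfamiliar with windowing and triggers, here is a rough sketch of how they look in the Java SDK. The events collection and the class name are assumptions for illustration, not something from the interview.

```java
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

public class WindowingSketch {
  // 'events' stands for an unbounded PCollection of incoming records.
  static PCollection<String> windowByMinute(PCollection<String> events) {
    return events.apply(
        Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
            // Emit a speculative pane 30 seconds after the first element
            // arrives, then a final pane when the watermark passes the
            // end of the window.
            .triggering(AfterWatermark.pastEndOfWindow()
                .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
                    .plusDelayOf(Duration.standardSeconds(30))))
            // Accept data up to five minutes late; already-fired panes
            // are discarded rather than accumulated.
            .withAllowedLateness(Duration.standardMinutes(5))
            .discardingFiredPanes());
  }
}
```

This is the knob FP mentions below: changing the window, trigger, and lateness settings adapts the same business logic to different latency and completeness needs.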

FP: The ability to easily adapt your business logic as your needs around latency and completeness change — from batch to assorted streaming use cases.

IM: The ability to run the same code in different distributed systems.

Are you running or about to run Beam in production? What has been your experience? Has it changed anything about your developer experience?

JL: I have written some ETL jobs and data statistics jobs. Beam made my code more broadly useful, not tied to just one engine.

IM: We at Talend use Beam for our Data Preparation solution, already in production. Personally, using Beam has been an eye-opener for me. I had previous experience with other Big Data frameworks, and at the beginning the abstractions of Beam didn't seem as user-friendly to me as those of the other systems. But now I have the impression that other APIs hide complexity to captivate casual users, and this hidden complexity ends up biting you in the end, particularly the closer you get to ops. Beam is lean, clean, and straightforward; it is an approach with no surprises, and that is something developers care about.

If you’re non-Google, why are you working on Beam? If you are from Google, why were you drawn to working on Beam? If you’ve been working on Beam for a while, has your motivation changed over time?

LC: (Google) Combining all the knowledge that Google had developed internally with all the insights produced by the external community is an awesome challenge and bodes well for the data processing & analytics community overall.

KK: When we started Beam, I was excited to be a part of pushing the state of the art in unifying batch and streaming executions – for users, unifying historical analysis with ongoing low-latency results. There’s still great stuff happening around stream/table duality and the relationship between DAG-based models and higher-level querying. But looking ahead I am focused on cross-language portability. We have a variety of great processing engines implemented primarily in Java. I want to make these available to any language via our portability APIs. Python and R are obvious choices for data lovers.

JX: Non-Google (Alipay). I am currently working on BeamSQL. Batch processing is very mature in our company, but stream processing is not. We want to use BeamSQL to enable proficient batch processing users to reuse their knowledge in stream processing and let them focus on business logic; let Beam do the rest.

JL: Non-Google (Alibaba). Flink, Spark, Storm: having so many engines makes learning and development too costly for users. I hope there is a standard model to make big data programming easier.

JF: Non-Google (Freightos). Beam looked like the easiest way to do simple ETL. (Actually, we are just copying a Datastore.) And it was easy, except for several cases of swallowed exceptions that were very hard to debug, and incompatibility of transitive Maven dependencies with the Google Cloud Datastore libraries. (For some reason, Beam uses an ad hoc library to access Datastore.) But the real blocker was issue BEAM-991. I am pleased that it is resolved for an upcoming release, but I am puzzled that this bug has been in the codebase since Datastore support began, yet users continue to use Beam. Could it be that no one is using Beam with Datastore, or that the few who do have very small entities (<10 KB on average)?

FP (Googler): I worked on the internal Google technology that preceded Beam and was excited by the opportunity to integrate those ideas with the broader ecosystem.

IM: Non-Google (Talend). We (Talend) produce data integration solutions, and we found that we needed to rewrite our codebase every time a new data processing framework got traction. Beam offers a solution to this problem: not only an abstraction layer but also a rich model that covers all of our use cases. My motivation for Beam has evolved with the needs of the project. The first stable release is a great moment because we can now focus on higher-level abstractions and work on the missing data connectors, so users can reach all the different data stores in their servers or in the cloud.

What’s a feature that you’d really like to see added to Beam?

LC: Iterative computation to better support machine learning initiatives.

KK: Transparent and automatically correct use of retractions, to unlock the stream/table duality and queryable state. Iterative computation (loops) with correct watermarks.

JX: Like what LC said, but including iterative computation in Beam might be a big design challenge.

JL: Machine learning support, retractions, and schema support (like DataFrames; it would give the underlying engine more room to optimize performance).

JF: A status-update callback, to allow a monitoring tool to track progress and detect end of processing, including success/failure.

IM: A unified way to submit and monitor pipelines, like a REST service that lets you execute pipelines and monitor what is going on, independently of the processing engine.

How would you recommend a new person get started with Beam?

KK: If you just want to get something running, we've got a Java quickstart and a Python quickstart, whichever works for you. For a sense of what it is like to work with Beam, try the WordCount walkthrough (a minimal sketch of it appears below). And if you prefer videos, there are also a ton of talks on YouTube with little Beam samples, demos, and background on Beam's ideas.
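For reference, the core of that walkthrough looks roughly like the following Java sketch; the input and output paths are placeholders, and the word-splitting regex is a simplification.

```java
import java.util.Arrays;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalWordCount {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(TextIO.read().from("input.txt"))            // placeholder input
     // Split each line into words on non-letter characters.
     .apply(FlatMapElements.into(TypeDescriptors.strings())
         .via((String line) -> Arrays.asList(line.split("[^\\p{L}]+"))))
     // Drop empty strings produced by leading punctuation.
     .apply(Filter.by((String word) -> !word.isEmpty()))
     // Count occurrences of each word.
     .apply(Count.perElement())
     // Format each word/count pair as a line of text.
     .apply(MapElements.into(TypeDescriptors.strings())
         .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
     .apply(TextIO.write().to("wordcounts"));           // placeholder output prefix

    p.run().waitUntilFinish();
  }
}
```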

IM: Play with the examples, but more importantly, try to solve a problem with Beam, even a trivial one. Find a dataset that you want to analyze and do it with Beam, or translate a program from Hadoop, Storm, Spark, or whatever system you already know. The best way to learn is active learning, and don't be afraid to ask questions; we have an open community and the user mailing list to help everyone.

How would you recommend a person start contributing to Beam?

LC: This is an easy one, https://beam.apache.org/contribute/contribution-guide/

KK: What Luke said :-). But if you do one thing, subscribe to dev@beam.apache.org!

TY: Also, subscribe to user@beam.apache.org to learn about the use cases Beam users have.

JX: Find the component you are really interested in, read the code for that component, learn from others' pull requests, and contribute some small PRs; then you are on your way.

AA: Look for the starter bugs in JIRA. They are tagged with the starter/newbie labels.

IM: Start humble: find a need you have that is not covered, e.g. fix a bug you found, or write a data connector that you need and that does not exist yet. It can seem hard at the beginning, but Beam has good documentation now, a stable API, and a great community to help you.