As the big data era arrives, processing endless streams of data in real time has become a necessity in many scenarios. Take Baidu as an example: trillions of records flow into its real-time computation platform every day. Since 2011, DStream, a true streaming computation engine with its own scheduler, has been designed, implemented, and put into production. It offers a low-level but flexible API and configuration, and it supports logging, monitoring, paging, tracing, releasing, dictionaries, and more, all of which are crucial in production. Over time, because DStream targets developers and comes with a learning curve, Spark Streaming was introduced for data scientists. Our team follows the Spark community, and best practices drawn from running DStream in complex production environments are contributed back to Spark Streaming. We have adapted Baidu's home-brewed storage, messaging system, PaaS, and other infrastructure to Spark Streaming. In this session, we'd like to share our experience with DStream and Spark Streaming at Baidu.