{"id":67305,"date":"2022-10-05T13:34:02","date_gmt":"2022-10-05T13:34:02","guid":{"rendered":"https:\/\/www.techrepublic.com\/?p=4000142"},"modified":"2022-10-05T13:34:02","modified_gmt":"2022-10-05T13:34:02","slug":"current-2022-confluent-creates-data-pipeline-lifeline","status":"publish","type":"post","link":"https:\/\/cloudnewshub.com\/?p=67305","title":{"rendered":"Current 2022: Confluent creates data pipeline lifeline"},"content":{"rendered":"<figure id=\"attachment_4000148\" aria-describedby=\"caption-attachment-4000148\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-article wp-image-4000148\" src=\"http:\/\/cloudnewshub.com\/wp-content\/uploads\/2022\/10\/current-2022-confluent-creates-data-pipeline-lifeline.jpg\" alt=\"3d illustration of a data and artificial intelligence pipeline in voxel style\" width=\"770\" height=\"433\"><figcaption id=\"caption-attachment-4000148\" class=\"wp-caption-text\">Image: Mark\/Adobe Stock<\/figcaption><\/figure>\n<p>Data flows. Even when data comes to rest, gets sent to backup and possibly finds itself in difficult-to-retrieve long-term storage retirement locations, data generally flows from one place to another during its lifetime.<\/p>\n<p>When data is in motion, it typically moves between applications and their dependent services. 
But data obviously also moves between applications and operating systems, between application components, containers and microservices \u2014 and, in the always-on era of cloud and the web \u2014 via <a href=\"https:\/\/www.techrepublic.com\/resource-library\/whitepapers\/api-for-dummies-handbook-third-edition\" target=\"_blank\" rel=\"nofollow noopener sponsored noreferrer\">application programming interfaces<\/a>.<\/p>\n<p>Today data flows to such an extent that we now talk about data streaming. But what is it, and how do we harness this computing principle?<\/p>\n<h2>What is data streaming?<\/h2>\n<p>A key information paradigm for modern IT stacks, data streaming describes the movement of data through and between these channels in a time-ordered sequence. It is a close cousin of event-driven computing, in which an asynchronous log entry is created for every keyboard stroke, mouse click or IoT sensor reading; data streaming covers the cases where those flows of events are large and continuous.<\/p>\n<p><strong>SEE: <a href=\"https:\/\/www.techrepublic.com\/resource-library\/whitepapers\/hiring-kit-database-engineer\/\" target=\"_blank\" rel=\"nofollow noopener sponsored noreferrer\">Hiring Kit: Database engineer<\/a> (TechRepublic Premium)<\/strong><\/p>\n<p>Confluent is a data streaming company with a self-proclaimed mission to set data in motion. As complex as data streaming engineering sounds, the team has now created Confluent Stream Designer, a visual interface that is said to enable software developers to build streaming data pipelines in minutes.<\/p>\n<h2>Designing data streams<\/h2>\n<p>Confluent offers a simple point-and-click user interface, but it\u2019s not necessarily a point-and-click UI for you and your favorite auntie or uncle. 
This is a point-and-click UI intended to make data streams accessible to developers beyond specialized Apache Kafka experts.<\/p>\n<p>Apache Kafka is an open source distributed event streaming platform created by Confluent co-founder and CEO Jay Kreps and his colleagues Neha Narkhede and Jun Rao while the trio worked at LinkedIn. Confluent offers a cloud-native foundational platform for real-time data streaming from multiple sources, designed to be the \u201cintelligent connective tissue\u201d for the software-driven back-end operations that deliver rich front-end user functions.<\/p>\n<p>The theory behind Confluent Stream Designer is that with more teams able to rapidly build and iterate on streaming pipelines, organizations can quickly connect more data throughout their business for more agile development alongside better and faster in-the-moment decision-making.<\/p>\n<p>At the <a href=\"https:\/\/2022.currentevent.io\/website\/39543\/welcome\" target=\"_blank\" rel=\"nofollow noopener sponsored noreferrer\">Current 2022: The Next Generation of Kafka Summit<\/a> in Texas, there was the opportunity to speak directly with Confluent about its views and ambitions.<\/p>\n<p>\u201cWe are in the middle of a major technological shift, where data streaming is making real time the new normal, enabling new business models, better customer experiences and more efficient operations,\u201d said Kreps. 
\u201cWith Stream Designer we want to democratize this movement towards data streaming and make real time the default for all data flow in an organization.\u201d<\/p>\n<h2>Streaming moves in from the edge<\/h2>\n<p>Kreps and the team further state that streaming technologies that were once at the edges have become the core of critical business functions.<\/p>\n<p>Because traditional batch processing can no longer keep pace with the growing number of use cases that depend on millisecond updates, Confluent says that more organizations are pivoting to streaming, as their livelihood is defined by the ability to deliver data instantaneously across customer experiences and business operations.<\/p>\n<p>As something of a de facto standard for data streaming today, Kafka is said to enable some 80% of Fortune 100 companies to handle large volumes and varieties of data in real time.<\/p>\n<p>But building streaming data pipelines on open source Kafka requires large teams of highly specialized engineering talent and time-consuming work across multiple tools. This puts pervasive data streaming out of reach for many organizations and leaves data pipelines clogged with stale and outdated data.<\/p>\n<p>Analyst house IDC has said that businesses need to add more streaming use cases, but the lack of developer talent and increasing technical debt stand in the way.<\/p>\n<p>\u201cIn terms of developers, data scientists and all other software engineers working with data streaming technologies, this is quite a new idea for many of them,\u201d explained Kris Jenkins, developer advocate at Confluent. 
\u201cThis is significant progression onwards from use of a technology like a relational database.\u201d<\/p>\n<p>This all paves the way to a point where firms are able to create a so-called data mesh: a state of operations where every department in a business is able to share its data via the central IT function to aid higher-level decision-making at the corporate operational level. In this meshed fabric, other departments are also able to access those real-time data streams \u2014 subject to defined policy access controls \u2014 without needing involvement from the teams that originated the data.<\/p>\n<h2>What does Confluent offer developers?<\/h2>\n<p>In terms of product specifics, Confluent\u2019s Stream Designer provides developers with what its makers call a \u201cflexible point-and-click canvas\u201d to build streaming data pipelines in minutes. It does this through its ability to describe data flows and business logic easily within the GUI.<\/p>\n<p>It takes a developer-centric approach where users with different skills and needs can switch between the UI, a code editor and a command-line interface to declaratively build data flow logic. It brings developer-oriented practices to pipelines, making it easier for developers new to Kafka to turn data into business value faster.<\/p>\n<p>With Stream Designer software, teams can avoid spending extended periods managing individual components on open source Kafka. Through one visual interface, developers can build pipelines with the complete Kafka ecosystem and then iterate and test before deployment into production in a modular fashion. There\u2019s no longer a need to work across multiple, discrete components, like Kafka Streams and Kafka Connect, that require their own boilerplate code each time.<\/p>\n<p>After building a pipeline, the next challenge is maintaining and updating it over its lifecycle as business requirements change and the tech stack evolves. 
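<\/p>\n<p>At heart, such a pipeline is a chain of per-event transformations over a time-ordered stream. As a rough, dependency-free sketch of the kind of logic one stage encodes (a filter followed by an enrichment; every name, field and value here is an illustrative assumption, not taken from Confluent\u2019s product):<\/p>

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Event:
    timestamp: float  # event time, seconds since epoch
    key: str          # e.g. a customer id (illustrative)
    value: float      # e.g. an order amount (illustrative)

def filter_stage(events: Iterable[Event], threshold: float) -> Iterator[Event]:
    # Drop events at or below the threshold, preserving time order.
    return (e for e in events if e.value > threshold)

def enrich_stage(events: Iterable[Event]) -> Iterator[dict]:
    # Attach a derived field to each surviving event.
    for e in events:
        yield {'key': e.key, 'value': e.value, 'large': e.value > 1000}

stream = [
    Event(1.0, 'cust-1', 250.0),
    Event(2.0, 'cust-2', 1500.0),
    Event(3.0, 'cust-1', 40.0),
]
result = list(enrich_stage(filter_stage(stream, 100.0)))
# result: [{'key': 'cust-1', 'value': 250.0, 'large': False},
#          {'key': 'cust-2', 'value': 1500.0, 'large': True}]
```

<p>In a real deployment the input and output would be Kafka topics and the logic would run in Kafka Streams, ksqlDB or a similar runtime; the point is simply that each pipeline stage is a small, testable function over an event stream.<\/p>\n<p>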
Stream Designer provides a unified, end-to-end view to make it easy to observe, edit and manage pipelines to keep them up to date.<\/p>\n<h2>CEO Kreps\u2019 market stance<\/h2>\n<p>Taking stock of the current state of what is clearly a still-nascent but ascendant technology, how does Kreps feel about his company\u2019s relationship with other enterprise technology vendors?<\/p>\n<p>\u201cWell you know, this is a pretty significant shift in terms of how we all think about data and how we work with data \u2014 and, in real terms, it\u2019s actually impacting all the technologies around it,\u201d said Kreps. \u201cSome of the operational database vendors are already providing pretty deep integration to us \u2014 and us to them. That\u2019s great for us, as our goal is to enable that connection and make it easy to work with Confluent across all their different systems.\u201d<\/p>\n<p>Will these same enterprise technology vendors now start to create their own data streaming solutions and come to market with their own approach? And if they do, would Kreps count that in some ways as a compliment to Confluent?<\/p>\n<p>He agrees that there will inevitably be some attempt at replicating functionality. Overall though, he points to \u201ca mindset shift among practitioners\u201d in terms of what they expect and demand out of any new product, so he clearly hopes his firm\u2019s dedicated focus on this space will win through.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Image: Mark\/Adobe Stock Data flows. Even when data comes to rest, gets sent to backup and possibly finds itself in difficult-to-retrieve long-term storage retirement locations, data generally flows from one place to another during its lifetime. When data is in motion, it typically moves between applications and their dependent services. 
But data obviously also moves [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":67306,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[172,39,40,783],"tags":[],"class_list":["post-67305","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-apache-kafka","category-big-data","category-cloud","category-cloudsync"],"_links":{"self":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts\/67305","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=67305"}],"version-history":[{"count":0,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/posts\/67305\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=\/wp\/v2\/media\/67306"}],"wp:attachment":[{"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=67305"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=67305"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudnewshub.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=67305"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}