Book Details
Apache Spark is a lightweight, lightning-fast cluster computing technology designed for fast computation. It sits on top of Hadoop MapReduce and extends the MapReduce model to efficiently support a wider range of computations, including interactive queries and stream processing.

Big Data is a term for data sets whose management, control, and processing exceed the capabilities of conventional software tools within a tolerable, expected time. The scale of big data keeps growing, from a few tens of terabytes to several petabytes in a single data set. Examples of big data include web logs, radio-frequency identification (RFID) systems, sensor networks, social networks, Internet texts and documents, Internet search indexes, astronomy, medical records, photo archives, video archives, geological research, and large-scale commerce.
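To make the description above concrete, here is a minimal sketch in Scala, assuming Spark 2.x with the spark-sql dependency on the classpath and a hypothetical local file named input.txt. It runs a MapReduce-style word count with the RDD API and then the same aggregation as an interactive Spark SQL query over a DataFrame; it is an illustration, not code from the book.

```scala
import org.apache.spark.sql.SparkSession

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WordCountSketch")
      .master("local[*]")          // local mode; on a cluster the master is set by the submitter
      .getOrCreate()

    // RDD-style batch computation, in the spirit of the extended MapReduce model
    val lines = spark.sparkContext.textFile("input.txt")  // hypothetical input path
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.take(10).foreach(println)

    // The same data as a DataFrame, queried interactively with Spark SQL
    import spark.implicits._
    val wordsDf = lines.flatMap(_.split("\\s+")).toDF("word")
    wordsDf.createOrReplaceTempView("words")
    spark.sql("SELECT word, COUNT(*) AS n FROM words GROUP BY word ORDER BY n DESC LIMIT 10").show()

    spark.stop()
  }
}
```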
In the Packt Apache Spark 2 for Beginners course, you will become familiar with the introductory features and capabilities of Apache Spark 2.
Contents of the Packt Apache Spark 2 for Beginners course:
1: Spark Fundamentals
2: Spark Programming Model
3: Spark SQL
4: Spark Programming with R
5: Spark Data Analysis with Python
6: Spark Stream Processing
7: Spark Machine Learning
8: Spark Graph Processing
9: Designing Spark Applications
Key Features
- This book offers an easy introduction to the Spark framework, published on the latest version of Apache Spark 2
- Perform efficient data processing, machine learning, and graph processing using various Spark components
- A practical guide aimed at beginners to get them up and running with Spark

Book Description
Spark is one of the most widely used large-scale data processing engines and runs extremely fast. It is a framework whose tools are equally useful for application developers and data scientists.

This book starts with the fundamentals of Spark 2 and covers the core data processing framework and API, installation, and application development setup. Then the Spark programming model is introduced through real-world examples, followed by Spark SQL programming with DataFrames. An introduction to SparkR is covered next. Later, we cover the charting and plotting features of Python in conjunction with Spark data processing. After that, we take a look at Spark's stream processing, machine learning, and graph processing libraries. The last chapter combines all the skills you learned from the preceding chapters to develop a real-world Spark application.

By the end of this book, you will have all the knowledge you need to develop efficient large-scale applications using Apache Spark.

What you will learn
- Get to know the fundamentals of Spark 2 and the Spark programming model using Scala and Python
- Know how to use Spark SQL and DataFrames using Scala and Python
- Get an introduction to Spark programming using R
- Perform Spark data processing, charting, and plotting using Python
- Get acquainted with Spark stream processing using Scala and Python
- Be introduced to machine learning using Spark MLlib
- Get started with graph processing using Spark GraphX
- Bring together all that you've learned and develop a complete Spark application

About the Author
Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based in the UK. His experience includes architecting, designing, and developing software applications. He has worked on various technologies, including major databases, application development platforms, web technologies, and big data technologies. Since 2000 he has been working mainly in Java-related technologies and does heavy-duty server-side programming in Java and Scala. He has worked on highly concurrent, highly distributed, high-transaction-volume systems. Currently he is building a next-generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.

Raj holds a master's degree in Mathematics, a master's degree in Computer Information Systems, and many certifications in ITIL and cloud computing. He is the author of Cassandra Design Patterns - Second Edition, published by Packt.

When not working on the assignments his day job demands, Raj is an avid listener of classical music and watches a lot of tennis.

Table of Contents
1: Spark Fundamentals
2: Spark Programming Model
3: Spark SQL
4: Spark Programming with R
5: Spark Data Analysis with Python
6: Spark Stream Processing
7: Spark Machine Learning
8: Spark Graph Processing
9: Designing Spark Applications
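As a companion to the stream-processing topic listed above, the following is a minimal sketch using Spark 2's Structured Streaming API. The book's streaming chapter may use a different API (for example DStreams), and the socket source on localhost:9999 (fed by something like nc -lk 9999) is an assumption for illustration only. It maintains a running word count over incoming lines and prints the counts after each micro-batch.

```scala
import org.apache.spark.sql.SparkSession

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("StreamingSketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Read lines from a socket as an unbounded DataFrame (hypothetical local source)
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Running word count over the stream
    val counts = lines.as[String]
      .flatMap(_.split("\\s+"))
      .groupBy("value")
      .count()

    // Print the complete set of counts to the console after each micro-batch
    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```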