Download the book Mastering Large Datasets with Python: Parallelize and Distribute Your Python Code
by John T. Wolohan
Persian title: Mastering Large Datasets with Python: Parallelize and Distribute Your Python Code
Download book
Book details
About the technology
Programming techniques that work well on laptop-sized data can slow to a crawl—or fail altogether—when applied to massive files or distributed datasets. By mastering the powerful map and reduce paradigm, along with the Python-based tools that support it, you can write data-centric applications that scale efficiently without requiring codebase rewrites as your requirements change.
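As a minimal illustration of the map and reduce pattern the book is built around, the sketch below counts words across a few lines of text with Python's built-in map and functools.reduce. The sample data and counting task are invented for illustration and are not code from the book.

from functools import reduce

# Hypothetical data: a few lines of text (illustrative only)
lines = ["big data with python", "python scales with map and reduce"]

# Map step: transform each line into its word count
word_counts = map(lambda line: len(line.split()), lines)

# Reduce step: fold the per-line counts into a single total
total_words = reduce(lambda acc, n: acc + n, word_counts, 0)

print(total_words)  # prints 10

Because the map step treats every line independently, the same structure can later be handed to a parallel or distributed mapper without rewriting the logic.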
About the book
Mastering Large Datasets with Python teaches you to write code that can handle datasets of any size. You’ll start with laptop-sized datasets that teach you to parallelize data analysis by breaking large tasks into smaller ones that can run simultaneously. You’ll then scale those same programs to industrial-sized datasets on a cluster of cloud servers. With the map and reduce paradigm firmly in place, you’ll explore tools like Hadoop and PySpark to efficiently process massive distributed datasets, speed up decision-making with machine learning, and simplify your data storage with AWS S3.
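To give a rough sense of how that same map/reduce shape scales to a cluster, here is a hedged PySpark word-count sketch. The Spark session setup and the file path data/corpus.txt are assumptions made for illustration, not the book's own example.

from pyspark.sql import SparkSession

# Assumed local Spark installation; the input path is illustrative only
spark = SparkSession.builder.appName("word-count").getOrCreate()
lines = spark.sparkContext.textFile("data/corpus.txt")

counts = (
    lines.flatMap(lambda line: line.split())   # map: line -> words
         .map(lambda word: (word, 1))          # map: word -> (word, 1)
         .reduceByKey(lambda a, b: a + b)      # reduce: sum counts per word
)

print(counts.take(5))  # a few (word, count) pairs
spark.stop()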
What's inside
• An introduction to the map and reduce paradigm
• Parallelization with the multiprocessing module and pathos framework (see the sketch after this list)
• Hadoop and Spark for distributed computing
• Running AWS jobs to process large datasets
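Below is a minimal sketch of the kind of parallel map the standard-library multiprocessing module provides; the pathos framework exposes a similar Pool interface. The transform function and input range are invented for illustration, not taken from the book.

from multiprocessing import Pool

def transform(n):
    # Stand-in for a CPU-bound per-record computation
    return n * n

if __name__ == "__main__":
    # Split the work across four worker processes and map in parallel
    with Pool(processes=4) as pool:
        results = pool.map(transform, range(10))
    print(results)  # [0, 1, 4, 9, ...]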
About the reader
For Python programmers who need to work faster with more data.
About the author
J. T. Wolohan is a lead data scientist at Booz Allen Hamilton and a PhD researcher at Indiana University, Bloomington.