Pentaho Data Integration Cookbook Second Edition
Alex Meadows, María Carina Roldán
Format: PDF / Kindle (mobi) / ePub
The premier open source ETL tool is at your command with this recipe-packed cookbook. Learn to use data sources in Kettle, avoid pitfalls, and dig out the advanced features of Pentaho Data Integration the easy way.
- Integrate Kettle with other components of the Pentaho Business Intelligence Suite to build and publish Mondrian schemas, create reports, and populate dashboards
- This book contains an organized sequence of recipes packed with screenshots, tables, and tips so you can complete the tasks as efficiently as possible
- Manipulate your data by exploring, transforming, validating, integrating, and performing data analysis
Pentaho Data Integration is the premier open source ETL tool, providing easy, fast, and effective ways to move and transform data. While PDI is relatively easy to pick up, it can take time to learn the best practices so you can design your transformations to process data faster and more efficiently. If you are looking for clear and practical recipes that will advance your skills in Kettle, then this is the book for you.
Pentaho Data Integration Cookbook Second Edition explains the Kettle features in detail and provides easy-to-follow recipes on file management and databases that can throw a curveball to even the most experienced developers.
Pentaho Data Integration Cookbook Second Edition provides updates to the material covered in the first edition as well as new recipes that show you how to use some of the key features of PDI that have been released since the publication of the first edition. You will learn how to work with various data sources: relational and NoSQL databases, flat files, XML files, and more. The book will also cover best practices that you can apply immediately within your own solutions, such as building reusable code, improving data quality, and using plugins that add even more functionality.
Pentaho Data Integration Cookbook Second Edition provides recipes that cover the common pitfalls even seasoned developers can find themselves facing. You will also learn how to use various data sources in Kettle, as well as its advanced features.
What you will learn from this book
- Configure Kettle to connect to relational and NoSQL databases and web applications like SalesForce, explore them, and perform CRUD operations
- Utilize plugins to get even more functionality into your Kettle jobs
- Embed Java code in your transformations to gain performance and flexibility
- Execute and reuse transformations and jobs in different ways
- Integrate Kettle with Pentaho Reporting, Pentaho Dashboards, Community Data Access, and the Pentaho BI Platform
- Interface Kettle with cloud-based applications
- Learn how to control and manipulate data flows
- Utilize Kettle to create datasets for analytics
Pentaho Data Integration Cookbook Second Edition is written in a cookbook format, presenting examples in the style of recipes. This allows you to go directly to your topic of interest, or follow the topics throughout a chapter to gain in-depth knowledge.
Who this book is written for
Pentaho Data Integration Cookbook Second Edition is designed for developers who are familiar with the basics of Kettle but who wish to move up to the next level. It is also aimed at advanced users who want to learn how to use the new features of PDI as well as best practices for working with Kettle.
Note that if you run the statement from outside Spoon, in order to see the changes inside the tool you either have to clear the cache by right-clicking on the database connection and selecting the Clear DB Cache option, or restart Spoon.

See also
- Creating or altering a database table from PDI (runtime)

Creating or altering a database table from PDI (runtime)

When you are developing with PDI, you know (or have the means to find out) whether or not the tables you need exist, and if they have all the
database. MySQL's information_schema database also has a table that details the columns of each table (aptly named COLUMNS). For larger databases, you may want to filter just a subset of tables based on given columns or types. While it has been stated before, it bears mentioning again that this technique must be used with extreme caution, since it can drastically alter your database depending on the type of query executed!

See also
- Getting data from a database by running a query built at
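As an illustration, a query along the following lines pulls column metadata from information_schema.COLUMNS and narrows it to one schema; this is a sketch, and the schema name shown is a placeholder, not a value from the original recipe:

```sql
-- List every column of every table in a chosen schema (schema name is a placeholder)
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'my_database'
ORDER BY TABLE_NAME, ORDINAL_POSITION;
```

Adding further conditions on COLUMN_NAME or DATA_TYPE in the WHERE clause is how you would filter to just the subset of tables mentioned above.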
or shut down almost as fast as a simple command. S3 is a scalable storage space that can be shared across virtual instances and is a common location for files to be processed. With this recipe, we will be reading information out of a file stored in S3.

Reading and Writing Files

This recipe will require access to AWS, which does have a free tier for new users. If you have already used AWS in the past and do not have access to the free tier, the recipe will not deal with large transfers
model for the kinds of queries we are going to be running. For this recipe, we want to find the players who attended a school in a given year. With that in mind, our data model turns into the following:

School
  row key: key
  columns: schoolName, schoolCity, schoolState, schoolNick,
           playerID, nameFirst, nameLast, nameNick, yearMin, yearMax

We now have a flat dataset with which to answer our queries. The schema script to run within the HBase shell is the following:
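The original script is not reproduced in this excerpt; a minimal sketch of what such a script looks like is shown below. In HBase only the table name and column family are declared up front (the individual columns listed above are created on write), and the single column family name used here is an assumption, not taken from the original:

```
# HBase shell sketch: create the School table with one column family
# (the family name 'd' is a placeholder assumption)
create 'School', 'd'
```

With a schema like this, each school becomes one row, and each player-related column is written as a qualifier under the single family.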
can also specify whether the element is a node or an attribute. In the example, you can set the field id_title as an attribute of the element Book: set Attribute to Y and Attribute parent name to Book, and you will have the following XML structure:
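The resulting XML is not included in this excerpt; a hedged sketch of the shape it would take is shown below. The id value and child fields are illustrative placeholders, not values from the original example:

```xml
<!-- Sketch: id_title is emitted as an attribute of Book instead of a child node.
     The attribute value and the title text are placeholder examples. -->
<Book id_title="1">
  <title>Sample Title</title>
</Book>
```

The key point is that marking a field as an attribute moves it out of the element body and into the opening tag of its parent element.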