TensorFlow Machine Learning Cookbook PDF free download

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.

Piracy: If you come across any illegal copies of our works in any form on the Internet, we would be grateful if you would provide us with the location address or website name. Please contact us at copyright@packt.com. If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book.

Thank you! For more information about Packt, please visit packt.com.

Why is this so important? While it's true that Keras and TensorFlow have had very good compatibility for a while, they have remained separate libraries with different development cycles, which causes frequent compatibility issues. Now that the relationship between these two immensely popular tools is official, they'll grow in the same direction, following a single roadmap and making the interoperability between them completely seamless.

Perhaps the biggest advantage of this merger is that by using Keras' high-level features, we are not sacrificing performance by any means.

Simply put, Keras code is production-ready! Unless the requirements of a particular project demand otherwise, in the vast majority of the recipes in this book, we'll rely on TensorFlow's Keras API. The reason behind this decision is twofold: Keras is the easiest way to express our models, and, as just noted, its high-level features carry no performance penalty.

Technical requirements
For this chapter, you will need a working installation of TensorFlow 2. If you can access a GPU, either physical or via a cloud provider, your experience will be much more enjoyable. In each recipe, in the Getting ready section, you will find the specific preliminary steps and dependencies to complete it.

Working with the basic building blocks of the Keras API
In this first recipe, we'll review the basic building blocks of Keras by creating a very simple fully connected neural network. Are you ready? Let's begin!

Getting ready
At the most basic level, a working installation of TensorFlow 2 is all you need.

How to do it…
In the following sections, we'll go over the sequence of steps required to complete this recipe.

Let's get started:

1. Import the required libraries from the Keras API, along with the scikit-learn helpers we'll use to prepare the data.
2. Create a model using the Sequential API, calling the add method to add one layer at a time.
3. Create the same model using the Functional API.
4. Create the model once more using an object-oriented approach, by sub-classing tensorflow.keras.Model.
5. Prepare the data so that we can train all the models we defined previously.
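For a concrete picture of how these three styles differ, here's a minimal sketch of the same toy network defined each way (the layer sizes and input shape are illustrative, not the book's exact architecture):

    import tensorflow as tf
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.models import Model, Sequential

    # Sequential API: add one layer at a time.
    sequential_model = Sequential()
    sequential_model.add(Dense(256, activation='relu', input_shape=(784,)))
    sequential_model.add(Dense(10, activation='softmax'))

    # Functional API: wire layers together explicitly.
    inputs = tf.keras.Input(shape=(784,))
    x = Dense(256, activation='relu')(inputs)
    outputs = Dense(10, activation='softmax')(x)
    functional_model = Model(inputs, outputs)

    # Object-oriented API: sub-class tf.keras.Model.
    class MyModel(Model):
        def __init__(self):
            super().__init__()
            self.hidden = Dense(256, activation='relu')
            self.out = Dense(10, activation='softmax')

        def call(self, x):
            return self.out(self.hidden(x))

    subclassed_model = MyModel()

All three objects expose the same training interface (compile, fit, predict), which is why the rest of the recipe can treat them interchangeably.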

How it works…
In the previous section, we went over the basic building blocks we'll need to build most deep learning-powered computer vision projects using TensorFlow 2. We learned that all Keras-related functionality is located inside the tensorflow.keras package. Next, we found that TensorFlow 2 gives us two main styles for building models: the symbolic (or declarative) style, embodied by the Sequential and Functional APIs, and the imperative style, where we sub-class tensorflow.keras.Model. The pros of the symbolic APIs are that you can examine the model by plotting it or printing its architecture; compatibility checks are run by the framework, diminishing the probability of runtime errors; and if the model compiles, it runs.

The imperative (sub-classing) API also allows for more flexibility in the forward pass than its symbolic counterpart. The pros of this API are that developing models becomes no different than any other object-oriented task, which speeds up the process of trying out new ideas; specifying a control flow is easy using Python's built-in constructs; and it's suited for non-DAG architectures, such as Tree-RNNs.

In terms of its cons, reusability is lost because the architecture is hidden within the class; almost no inter-layer compatibility checks are run, thus moving most of the debugging responsibility from the framework to the developer; and there's loss of transparency because information about the interconnectedness between layers is not available. We defined the same architecture using both the Sequential and Functional APIs, which correspond to the symbolic or declarative way of implementing networks, and also a third time using an imperative approach.

Loading images using the Keras API
In this recipe, we will learn how to load images using the Keras API, a very important task considering that, in computer vision, we'll always work with visual data. In particular, we'll learn how to open, explore, and visualize a single image, as well as a batch of them. Additionally, we will learn how to programmatically download a dataset.

Getting ready
Keras relies on the Pillow library to manipulate images. You can install it easily using pip: pip install Pillow. Let's get started!

Import the necessary packages: glob, os, tarfile, and matplotlib. Download and decompress the data, then build the path to the data directory based on the location of the downloaded file.

Only extract the data if it hasn't been extracted already, which we can check with os.path.exists. The output reports how many images are in the dataset. Then, display a single image using matplotlib's plt.imshow.

Next, load a batch of images using ImageDataGenerator. Although these steps are not required to load images (we can manually download and decompress a dataset), it's often a good idea to automate as many steps as we can. Finally, to load batches of images instead of each one individually, we used ImageDataGenerator, which had been configured to also normalize each image.

As a final remark, this last method returns batches of images and labels, but we ignored the latter as we're only interested in the former.
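To make this concrete, here's a minimal sketch of loading normalized batches with ImageDataGenerator (the directory name, batch size, and target size are placeholders, not the book's exact values):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Normalize pixel values to [0, 1] as each batch is produced.
    image_generator = ImageDataGenerator(rescale=1.0 / 255.0)

    # 'data_dir' is a placeholder: a directory with one sub-folder per class.
    iterator = image_generator.flow_from_directory(
        directory='data_dir',
        batch_size=32,
        target_size=(256, 256))

    # Each iteration yields an (images, labels) tuple; we keep only the images.
    images, _ = next(iterator)
    print(images.shape)  # e.g., (32, 256, 256, 3)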

Loading images using the tf.data.Dataset API
In this recipe, we will learn how to load images using the tf.data.Dataset API. Its functional style interface, as well as its high level of optimization, makes it a better alternative than the traditional Keras API for large projects, where efficiency and performance are a must. First, we need to import all the packages we'll need for this recipe: os, tarfile, matplotlib, and TensorFlow itself. Even though the image is now in memory, we must convert it into a format a neural network can work with.

It is certainly not required to execute these steps each time we want to load images (we can manually download and decompress a dataset), but it's good practice to automate as many steps as possible.

To load the images into memory, we created a dataset of their file paths, which enabled us to follow almost the same process to display single or multiple images. For each path, we loaded the corresponding image into memory, decoded it from its source format (PNG, in this recipe), converted it into a NumPy array, and then pre-processed it as needed. Finally, we took the first 10 images in the dataset and displayed them with matplotlib.
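A hedged sketch of what such a tf.data pipeline can look like (the glob pattern and target size are placeholder assumptions):

    import tensorflow as tf
    import matplotlib.pyplot as plt

    def load_image(path):
        # Read the raw bytes, decode the PNG, and resize/normalize the result.
        raw = tf.io.read_file(path)
        image = tf.image.decode_png(raw, channels=3)
        image = tf.image.convert_image_dtype(image, tf.float32)  # scales to [0, 1]
        return tf.image.resize(image, (256, 256))

    # 'data_dir/*.png' is a placeholder glob for the decompressed dataset.
    dataset = (tf.data.Dataset.list_files('data_dir/*.png')
               .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE))

    # Take and display the first 10 images.
    for i, image in enumerate(dataset.take(10)):
        plt.subplot(2, 5, i + 1)
        plt.imshow(image.numpy())
        plt.axis('off')
    plt.show()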

See also
If you want to learn more about the tf.data.Dataset API, the official TensorFlow documentation is a good place to start.

Saving and loading a model
Training a neural network is hard, time-consuming work. That's why retraining a model every time we need it is impractical.

The good news is that we can save a network to disk and load it whenever we need it, whether to improve its performance with more training or to use it to make predictions on fresh data. In this recipe, we'll learn about different ways to persist a model.

How to do it…
In this recipe, we'll train a CNN on MNIST just to illustrate our point.

Import everything we will need, including json, numpy, and scikit-learn's helpers. Normalize the data, reshape the grayscale images to include a channel dimension, and process the labels. Define a function for building a network, then compile and train the model for 50 epochs. Feel free to tune these values, including the batch size, according to the capacity of your machine.

Save the model, along with its weights, in HDF5 format using the save method. Then load the model and weights back from the HDF5 file, and make predictions using the loaded model.
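As a minimal sketch, assuming model and X_test come from the earlier steps, the save/load round trip looks like this:

    import numpy as np
    import tensorflow as tf

    # 'model' is assumed to be the trained CNN from the previous steps.
    model.save('model.h5')  # architecture + weights + optimizer state, in HDF5

    # Later (or in another program), restore the network in a single call.
    loaded_model = tf.keras.models.load_model('model.h5')

    # The loaded model predicts exactly like the original one.
    predictions = loaded_model.predict(X_test)
    print(np.argmax(predictions[0]))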

Here, we can see that our loaded model obtains the same accuracy as the original one. Let's take a look at this in more detail.

How it works…
We just learned how to persist a model to disk and back into memory using TensorFlow 2. We can save the whole model, weights included, in a single call, or save the architecture (for instance, as JSON) and the weights separately. The downside of the latter approach, though, is that we need more function calls, and this effort is rarely worth it.

Visualizing a model's architecture
Due to their complexity, one of the most effective ways to debug a neural network is by visualizing its architecture. In this recipe, we'll learn about two different ways we can display a model's architecture: as a text summary and as a diagram.

Getting ready
We'll need both Pillow and pydot to generate a visual representation of a network's architecture. We can install both libraries using pip: pip install Pillow pydot.

How to do it…
Visualizing a model's architecture is pretty easy, as we'll learn in the following steps:

Import all the required libraries, including Image from PIL and the relevant pieces of tensorflow.keras. Implement a model using all the layers we imported in the previous step. Notice that we are naming each layer for ease of reference later on. Summarize the model by printing a text representation of its architecture with model.summary(). The summary lists each layer along with its parameter count; the more parameters a model has, the harder and slower it is to train.
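A small sketch of both visualization methods, assuming model is the network defined in the previous step:

    from tensorflow.keras.utils import plot_model

    # Text representation of the architecture (layers, output shapes, parameters).
    model.summary()

    # Graphical representation; requires pydot and Graphviz to be installed.
    plot_model(model,
               to_file='architecture.png',
               show_shapes=True,   # annotate each layer with its input/output shapes
               show_layer_names=True)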

For it to work, however, we must ensure we have the required dependencies installed; for instance, pydot.

Nevertheless, if we want a more detailed, layer-wise summary of the number of parameters in our network, we must invoke the summary method. Finally, naming each layer is a good convention to follow. This makes the architecture more readable and easier to reuse in the future because we can simply retrieve a layer by its name. One remarkable application of this feature is neural style transfer.

Creating a basic image classifier
We'll close this chapter by implementing an image classifier on Fashion-MNIST, a popular alternative to MNIST. This will help us consolidate the knowledge we've acquired from the previous recipes. If, at any point, you need more details on a particular step, please refer to the previous recipes.

Getting ready
I encourage you to complete the five previous recipes before tackling this one since our goal is to come full circle with the lessons we've learned throughout this chapter. Also, make sure you have Pillow and pydot on your system; you can install them using pip: pip install Pillow pydot. We'll also use the tensorflow_docs package to plot our training curves. With that out of the way, import the necessary packages, including matplotlib.

Define a function that will load and prepare the dataset. It will normalize the data, one-hot encode the labels, take a portion of the training set for validation, and wrap the three data subsets into three separate tf.data.Datasets. Create a HistoryPlotter to visualize the training curves later on. Consume the training and validation datasets in batches. The first graph corresponds to the loss curve, both on the training and validation sets.
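Here's a rough sketch of such a loading function (the validation fraction and helper names are illustrative assumptions):

    import tensorflow as tf
    from sklearn.model_selection import train_test_split

    def load_dataset(val_size=0.2):
        (X_train, y_train), (X_test, y_test) = \
            tf.keras.datasets.fashion_mnist.load_data()

        # Normalize pixels to [0, 1] and add the channel dimension.
        X_train = (X_train / 255.0).reshape(-1, 28, 28, 1)
        X_test = (X_test / 255.0).reshape(-1, 28, 28, 1)

        # One-hot encode the labels (10 clothing categories).
        y_train = tf.keras.utils.to_categorical(y_train, 10)
        y_test = tf.keras.utils.to_categorical(y_test, 10)

        # Carve a validation split out of the training set.
        X_train, X_val, y_train, y_val = train_test_split(
            X_train, y_train, test_size=val_size)

        # Wrap each subset in a tf.data.Dataset.
        make = tf.data.Dataset.from_tensor_slices
        return (make((X_train, y_train)),
                make((X_val, y_val)),
                make((X_test, y_test)))

    train_ds, val_ds, test_ds = load_dataset()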

The output reports the final loss and accuracy on the test set.

How it works…
In this recipe, we used all the lessons learned in this chapter. Computer vision's applications are wide and varied. However, the biggest breakthroughs over the past decade, especially in the context of deep learning applied to visual tasks, have occurred in a particular domain known as image classification.

As the name suggests, image classification is the process of discerning what's in an image based on its visual content. Is there a dog or a cat in this image? What number is in this picture? Is the person in this photo smiling or not? Because image classification is such an important and pervasive task in deep learning applied to computer vision, the recipes in this chapter will focus on the ins and outs of classifying images using TensorFlow 2.

This chapter covers recipes on binary, multi-class, and multi-label classification, implementing ResNet from scratch, classifying with pre-trained networks (both through the Keras API and TensorFlow Hub), and data augmentation.

Technical requirements
Besides a working installation of TensorFlow 2, each recipe has its own dependencies. In each recipe, you'll find the steps and dependencies needed to complete it in the Getting ready section.

Creating a binary classifier to detect smiles
In its most basic form, image classification consists of discerning between two classes, or signaling the presence or absence of some trait. In this recipe, we'll implement a binary classifier that tells us whether a person in a photo is smiling.

Let's begin, shall we? Clone or download a zipped version of the dataset repository to a location of your preference. Then, import all the necessary packages, including os, pathlib, glob, numpy, and scikit-learn's helpers. Notice that we are loading the images in grayscale, and we're encoding the labels by checking whether the word positive is in the file path of the image.

Define a function to build the neural network. Because this is a binary classification problem, a single Sigmoid-activated neuron is enough in the output layer. Finally, train the model. In the following section, we'll explain the previous steps.

How it works…
We just trained a network to determine whether a person is smiling or not in a picture.

Our first big task was to take the images in the dataset and load them into a format suitable for our neural network. To extract the label, we looked at the containing folder of each image: if it contained the word positive, we encoded the label as 1; otherwise, we encoded it as 0 (a trick we used here was casting a Boolean as a float, like this: float(label)).

Next, we built the neural network, which is inspired by the LeNet architecture. The biggest takeaway here is that because this is a binary classification problem, we can use a single Sigmoid-activated neuron to discern between the two classes.
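As an illustration, a LeNet-inspired binary network might be sketched as follows (the filter counts and the 32x32x1 input shape are assumptions, not necessarily the book's exact values):

    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Conv2D, Dense, Flatten, MaxPooling2D

    def build_network(input_shape=(32, 32, 1)):
        # LeNet-style feature extractor: two conv/pool blocks.
        model = Sequential([
            Conv2D(20, (5, 5), activation='relu', padding='same',
                   input_shape=input_shape),
            MaxPooling2D((2, 2)),
            Conv2D(50, (5, 5), activation='relu', padding='same'),
            MaxPooling2D((2, 2)),
            Flatten(),
            Dense(500, activation='relu'),
            # Single sigmoid neuron: outputs P(smiling).
            Dense(1, activation='sigmoid')])
        model.compile(loss='binary_crossentropy', optimizer='adam',
                      metrics=['accuracy'])
        return model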

Finally, we measured the network's accuracy on the test set.

Creating a multi-class classifier to play rock paper scissors
More often than not, we are interested in categorizing an image into more than two classes.

As we'll see in this recipe, implementing a neural network to differentiate between many categories is fairly straightforward, and what better way to demonstrate this than by training a model that can play the widely known Rock Paper Scissors game? Let's dive in! To download the dataset, you'll need a Kaggle account, so sign in or sign up accordingly. Then, unzip the dataset in a location of your preference.

Import the required packages: os, pathlib, glob, numpy, tensorflow, and scikit-learn's helpers. Define a list with the three classes, and also a convenience alias into the tf.data namespace.

Define a function to build the network architecture, a function to load an image along with its label, and another that, given a path to a dataset, returns a tf.data.Dataset. After training, the network achieves good accuracy on the test set. Let's understand what we just did. In the image-loading function, we read an image from disk, decoded it from its JPEG format, converted it to grayscale (color information is not necessary in this problem), and then resized it to more manageable dimensions of 32x32x1.
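A hedged sketch of that image-loading logic (the folder-per-class layout used to derive the label is an assumption):

    import os
    import tensorflow as tf

    def load_image_and_label(image_path, target_size=(32, 32)):
        # Read and decode the JPEG file.
        raw = tf.io.read_file(image_path)
        image = tf.image.decode_jpeg(raw, channels=3)
        # Color carries no useful signal here, so collapse to one channel.
        image = tf.image.rgb_to_grayscale(image)
        image = tf.image.convert_image_dtype(image, tf.float32)
        image = tf.image.resize(image, target_size)  # -> 32x32x1
        # Assumed layout: the class name ('rock', 'paper', or 'scissors')
        # is the parent folder of the image.
        label = tf.strings.split(image_path, os.path.sep)[-2]
        return image, label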

Because this is a multi-class classification task, we use Softmax to activate the outputs. Using the three functions explained here, we prepared three subsets of data, with the purpose of training, validating, and testing the neural network. Finally, we evaluated the network on the test set. Based on these results, you could use this network as a component of a Rock Paper Scissors game to recognize the hand gestures of a player and react accordingly.

Creating a multi-label classifier to label watches
A neural network is not limited to modeling the distribution of a single variable. In fact, it can easily handle instances where each image has multiple labels associated with it.

Let's get started. We'll only use a subset of the data, focused on watches, which we'll construct programmatically in the How to do it… section. First, import the necessary packages: os, pathlib, csv's DictReader, glob, numpy, and scikit-learn's helpers, among others.

Define the paths to the images and the styles.csv metadata file. After training, the script prints the test accuracy, the confidence of each predicted label (Casual, Women, and so on), and the ground truth labels: ['Casual', 'Women'].

How it works…
We implemented a smaller version of a VGG network, which is capable of performing multi-label, multi-class classification, by modeling independent distributions for the gender and usage metadata associated with each watch. In other words, we modeled two binary classification problems at the same time: one for gender, and one for usage.
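One way to sketch such a multi-label head is a sigmoid-activated output layer trained with binary cross-entropy, so each label fires independently (the depths, input shape, and label count below are illustrative, not the book's exact architecture):

    from tensorflow.keras import Model
    from tensorflow.keras.layers import (Conv2D, Dense, Flatten, Input,
                                         MaxPooling2D)

    def build_multilabel_network(input_shape=(64, 64, 3), num_labels=4):
        # Small VGG-like feature extractor (depths are illustrative).
        inputs = Input(shape=input_shape)
        x = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
        x = MaxPooling2D((2, 2))(x)
        x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
        x = MaxPooling2D((2, 2))(x)
        x = Flatten()(x)
        x = Dense(256, activation='relu')(x)

        # Sigmoid (not softmax) output: each label fires independently,
        # so an image can be, e.g., both 'Casual' and 'Women' at once.
        outputs = Dense(num_labels, activation='sigmoid')(x)

        model = Model(inputs, outputs)
        model.compile(loss='binary_crossentropy', optimizer='adam',
                      metrics=['accuracy'])
        return model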

Implementing ResNet from scratch
Residual Network, or ResNet for short, constitutes one of the most groundbreaking advancements in deep learning. This architecture relies on a component called the residual module, which allows us to ensemble networks with depths that were unthinkable a couple of years ago.

There are variants of ResNet that have more than 100 layers, without any loss of performance!

Getting ready
We won't explain ResNet in depth, so it is a good idea to familiarize yourself with the architecture if you are interested in the details. Start by importing all the necessary modules: os, numpy, tarfile, tensorflow, and the relevant Keras layers. Define a convenience alias into the tf.data namespace, then define a function to create a residual module, the heart of the ResNet architecture.

Define a function to create a tf.data.Dataset from the images. Build, compile, and train a ResNet model. Because this is a time-consuming process, we'll save a version of the model after each epoch, using the ModelCheckpoint callback. Finally, load the best model saved by the callback and evaluate it.

How it works…
The key to ResNet is the residual module, which we implemented in Step 3. A residual module is a micro-architecture that can be reused many times to create a macro-architecture, thus achieving great depths. A residual module comprises two branches: the first one is the skip connection, also known as the shortcut branch, which is basically the same as the input. The second or main branch is composed of three convolution blocks: a 1x1 with a quarter of the filters, a 3x3 one, also with a quarter of the filters, and finally another 1x1, which uses all the filters.

The shortcut and main branches are merged at the end using the Add layer. We start by applying a 3x3 convolution to the input after it has been batch-normalized. Then we proceed to create the stages. A stage is a series of residual modules connected to each other. The length of the stages list controls the number of stages to create, and each element in this list controls the number of layers in that particular stage. The filters parameter contains the number of filters to use in each residual block within a stage.
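A sketch of a pre-activation residual module along these lines (the use_bias and shortcut-projection details are implementation assumptions):

    from tensorflow.keras.layers import (Activation, Add, BatchNormalization,
                                         Conv2D)

    def residual_module(x, filters, stride=1, reduce=False):
        # Pre-activation: BN -> ReLU before each convolution.
        shortcut = x
        act1 = Activation('relu')(BatchNormalization()(x))

        # Main branch: 1x1 (filters/4) -> 3x3 (filters/4) -> 1x1 (filters).
        y = Conv2D(filters // 4, (1, 1), use_bias=False)(act1)
        y = Activation('relu')(BatchNormalization()(y))
        y = Conv2D(filters // 4, (3, 3), strides=stride, padding='same',
                   use_bias=False)(y)
        y = Activation('relu')(BatchNormalization()(y))
        y = Conv2D(filters, (1, 1), use_bias=False)(y)

        # When the spatial size or depth changes, project the shortcut to match.
        if reduce:
            shortcut = Conv2D(filters, (1, 1), strides=stride,
                              use_bias=False)(act1)

        # Merge the two branches by element-wise addition.
        return Add()([y, shortcut])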

Finally, we built a fully connected, Softmax-activated layer on top of the stages, with as many units as there are classes in the dataset. Because ResNet is a very deep, heavy, and slow-to-train architecture, we checkpointed the model after each epoch.

Classifying images with a pre-trained network using the Keras API
We do not always need to train a classifier from scratch, especially when the images we want to categorize resemble ones that another network trained on. In these instances, we can simply reuse the model, saving ourselves lots of time.

In this recipe, we'll use a network pre-trained on ImageNet to classify a custom image. You're free to use your own images in the recipe.

How to do it…
As we'll see in this section, re-using a pre-trained classifier is very easy! Import the required packages. These include the pre-trained network used for classification, as well as some helper functions to preprocess the images.

Load the image to classify and run it through the network to obtain the top predictions. Instantiating the model will download the architecture and the weights if it is the first time we are using them; otherwise, a version of these files will be cached in our system. Then, we loaded the image we wanted to classify, resized it to dimensions compatible with InceptionV3 (299x299x3), converted it into a singleton batch with np.expand_dims, and ran the prediction. The resulting matrix contains the labels and probabilities in the 0th row, which we inspected to get the five most probable classes.
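The whole recipe fits in a few lines; here's a sketch (the image filename is a placeholder):

    import numpy as np
    from tensorflow.keras.applications.inception_v3 import (InceptionV3,
                                                            decode_predictions,
                                                            preprocess_input)
    from tensorflow.keras.preprocessing.image import img_to_array, load_img

    # Downloads (or reads from the cache) the ImageNet weights.
    model = InceptionV3(weights='imagenet')

    # 'dog.jpg' is a placeholder; InceptionV3 expects 299x299 RGB inputs.
    image = load_img('dog.jpg', target_size=(299, 299))
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)  # singleton batch: (1, 299, 299, 3)
    image = preprocess_input(image)        # scale pixels the way the net expects

    predictions = model.predict(image)
    for _, label, probability in decode_predictions(predictions, top=5)[0]:
        print(f'{label}: {probability:.4f}')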

Classifying images with a pre-trained network using TensorFlow Hub
TensorFlow Hub (TFHub) is a repository of hundreds of machine learning models contributed by the large and active community that surrounds TensorFlow. Here we can find models for a myriad of different tasks, not only for computer vision but for applications in many different domains, such as Natural Language Processing (NLP) and reinforcement learning. In this recipe, we'll use a model trained on ImageNet, hosted on TFHub, to make predictions on a custom image.

Getting ready
We'll need the tensorflow-hub and Pillow packages, which can be easily installed using pip: pip install tensorflow-hub Pillow. Once the input image has been loaded and classified, plot the original image with its most probable label using matplotlib.

How it works…
After importing the relevant packages, we proceeded to define the URL of the model we wanted to use to classify our input image.

To download and convert such a network into a Keras model, we used the convenient hub.KerasLayer class in Step 3. Then, in Step 4, we loaded the image we wanted to classify into memory, making sure its dimensions match the ones the network expects. Steps 5 and 6 perform the classification and extract the most probable category, respectively. However, to make this prediction human-readable, we downloaded a plain text file with all the ImageNet labels in Step 7, which we then parsed using numpy, allowing us to use the index of the most probable category to obtain the corresponding label, finally displayed in Step 10 along with the input image.
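A hedged sketch of the TFHub workflow (the model handle and label-file URL shown are examples of publicly available resources, not necessarily the ones the book uses):

    import numpy as np
    import tensorflow as tf
    import tensorflow_hub as hub
    from tensorflow.keras.preprocessing.image import img_to_array, load_img

    # Example classification handle; any image classification model
    # on tfhub.dev with a compatible signature would work.
    MODEL_URL = ('https://tfhub.dev/google/imagenet/'
                 'resnet_v2_50/classification/4')

    model = tf.keras.Sequential(
        [hub.KerasLayer(MODEL_URL, input_shape=(224, 224, 3))])

    # 'beetle.jpg' is a placeholder input image.
    image = load_img('beetle.jpg', target_size=(224, 224))
    image = img_to_array(image) / 255.0
    logits = model.predict(np.expand_dims(image, axis=0))
    predicted_index = np.argmax(logits[0])

    # Map the index back to a human-readable ImageNet label.
    labels_path = tf.keras.utils.get_file(
        'ImageNetLabels.txt',
        'https://storage.googleapis.com/download.tensorflow.org/'
        'data/ImageNetLabels.txt')
    labels = np.array(open(labels_path).read().splitlines())
    print(labels[predicted_index])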

Using data augmentation to improve performance with the Keras API
More often than not, we can benefit from providing more data to our model. But data is expensive and scarce.

Is there a way to circumvent this limitation? Yes, there is! We can synthesize new training examples by performing small modifications on the ones we already have, such as random rotations, random cropping, and horizontal flipping, among others.

In this recipe, we'll learn how to use data augmentation with the Keras API to improve performance. Let's begin. We'll work with sample images from the Caltech 101 dataset.

How to do it…
The steps listed here are necessary to complete the recipe.

Import the required modules: os, pathlib, matplotlib, and the relevant Keras pieces. First, train a baseline model and record its accuracy on the test set; then train the same network with data augmentation and record the accuracy again. The augmented model achieves a higher test accuracy than the baseline. Let's understand better what we did in the next section.
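As a sketch, augmentation boils down to training from an augmenting generator instead of the raw arrays (the parameter values below are illustrative, and model, X_train, y_train, X_val, and y_val are assumed from earlier steps):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Augmenting generator: each epoch sees randomly perturbed copies
    # of the training images (parameter values are illustrative).
    augmenter = ImageDataGenerator(rescale=1.0 / 255.0,
                                   rotation_range=30,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

    # 'model', 'X_train', and 'y_train' are assumed from the earlier steps.
    model.fit(augmenter.flow(X_train, y_train, batch_size=64),
              epochs=40,
              validation_data=(X_val, y_val))

Every epoch then sees a slightly different version of each training image, which is what drives the accuracy improvement reported above.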
