
How-To Tutorials

7019 Articles

How to Scaffold a New Module in Odoo 11

Sugandha Lahoti
25 May 2018
2 min read
The latest version of Odoo ERP, Odoo 11, brings a plethora of features targeting business application development. The market for Odoo is growing enormously, and if you have thought about developing for Odoo, now is the best time to start. This hands-on video course, Odoo 11 Development Essentials, by Riste Kabranov, will help you get started with Odoo and build powerful applications.

What is scaffolding?

With scaffolding, you can automatically create a skeleton structure that simplifies the bootstrapping of new modules in Odoo. Since it's an automatic process, you don't need to spend effort setting up the basic structure or looking up the starting requirements. Odoo has a scaffold command that creates the skeleton for a new module based on a template. By default, the new module is created in the current working directory, but we can provide a specific directory in which to create the module by passing it as an additional parameter.

A step-by-step guide to scaffolding a new module in Odoo 11:

Step 1: Navigate to /opt/odoo/odoo and create a folder named custom_addons.

Step 2: Scaffold a new module into custom_addons. For this:
- Locate odoo-bin.
- Use ./odoo-bin scaffold module_name folder_name to scaffold a new, empty module.
- Check that the new module is there and contains all the files needed.

A consolidated shell session appears at the end of this article. Check out the video for a more detailed walkthrough! This video tutorial has been taken from Odoo 11 Development Essentials. To learn how to build and customize business applications with Odoo, buy the full video course.

Related reading:
- ERP tool in focus: Odoo 11
- Building Your First Odoo Application
- Top 5 free Business Intelligence tools
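Putting the steps above together, a typical session might look like the sketch below. The module name (library_management) is an illustrative placeholder, and the exact set of generated files depends on the scaffold template shipped with your Odoo version:

$ cd /opt/odoo/odoo
$ mkdir custom_addons
$ ./odoo-bin scaffold library_management custom_addons
$ ls custom_addons/library_management
__init__.py  __manifest__.py  controllers  demo  models  security  views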


Building Our First Poky Image for the Raspberry Pi

Packt
14 Apr 2016
12 min read
In this article by Pierre-Jean Texier, the author of the book Yocto for Raspberry Pi, we cover the basic concepts of the Poky workflow. Using the Linux command line, we will proceed through the different steps: downloading, configuring, and preparing the Poky Raspberry Pi environment, and generating an image that can be used by the target.

Installing the required packages for the host system

The steps necessary for the configuration of the host system depend on the Linux distribution used. It is advisable to use one of the Linux distributions maintained and supported by Poky, to avoid wasting time and energy on setting up the host system. Currently, the Yocto Project is supported on the following distributions:

- Ubuntu 12.04 (LTS), 13.10, and 14.04 (LTS)
- Fedora release 19 (Schrödinger's Cat) and release 21
- CentOS release 6.4 and 7.0
- Debian GNU/Linux 7.0 through 7.6 (Wheezy)
- openSUSE 12.2, 12.3, and 13.1

Even if your distribution is not listed here, it does not mean that Poky will not work, but the outcome cannot be guaranteed. If you want more information about supported Linux distributions, you can visit http://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html

Poky on Ubuntu

The following list shows the required packages by function, given a supported Ubuntu or Debian Linux distribution. The dependencies for a compatible environment include:

- Download tools: wget and git-core
- Decompression tools: unzip and tar
- Compilation tools: gcc-multilib, build-essential, and chrpath
- String-manipulation tools: sed and gawk
- Document-management tools: texinfo, xsltproc, docbook-utils, fop, dblatex, and xmlto
- Patch-management tools: patch and diffstat

Here is the command to type on a headless system:

$ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath

Poky on Fedora

If you want to use Fedora, you just have to type this command:

$ sudo yum install gawk make wget tar bzip2 gzip python unzip perl patch diffutils diffstat git cpp gcc gcc-c++ glibc-devel texinfo chrpath ccache perl-Data-Dumper perl-Text-ParseWords perl-Thread-Queue socat

Downloading the Poky metadata

After installing all the necessary packages, it is time to download the Poky sources. This is done through the git tool, as follows:

$ git clone git://git.yoctoproject.org/poky

This checks out the master branch. Another method is to download a tar.bz2 file directly from this repository: https://www.yoctoproject.org/downloads

To avoid hazardous and problematic manipulations, it is strongly recommended to create and switch to a specific local branch (dizzy matches the Yocto Project 1.7/Poky 12.0 release used in this article). Use these commands:

$ cd poky
$ git checkout dizzy -b work_branch
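For convenience, the host preparation and download steps can be collected into a single script. This is just a sketch for an Ubuntu host; the package list and branch name come from the steps above:

# prepare-host.sh: set up an Ubuntu build host and fetch Poky
sudo apt-get install gawk wget git-core diffstat unzip texinfo \
    gcc-multilib build-essential chrpath
git clone git://git.yoctoproject.org/poky
cd poky
git checkout dizzy -b work_branch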
Downloading the Raspberry Pi BSP metadata

At this stage, we only have the base of the reference system (Poky), and we have no support for the Broadcom BCM SoC. Basically, the BSP provided by Poky only offers the following targets:

$ ls meta/conf/machine/*.conf
beaglebone.conf  edgerouter.conf  genericx86-64.conf  genericx86.conf  mpc8315e-rdb.conf

This is in addition to the QEMU machines provided by OE-Core:

qemuarm64.conf  qemuarm.conf  qemumips64.conf  qemumips.conf  qemuppc.conf  qemux86-64.conf  qemux86.conf

In order to generate a compatible system for our target, download the specific layer (the BSP layer) for the Raspberry Pi:

$ git clone git://git.yoctoproject.org/meta-raspberrypi

If you want to learn more about Git, you can visit the official website: http://git-scm.com/

Now we can verify that we have the configuration metadata for our platform (the raspberrypi.conf file):

$ ls meta-raspberrypi/conf/machine/*.conf
raspberrypi.conf

The examples and code presented in this article use Yocto Project version 1.7 and Poky version 12.0. For reference, the codename is Dizzy.

Now that we have our environment freshly downloaded, we can proceed with its initialization and configure our image through the various configuration files.

The oe-init-build-env script

The Poky directory contains a script named oe-init-build-env. This script configures and initializes the build environment. It is not intended to be executed but must be "sourced". Its work, among other things, is to initialize a number of environment variables and place you in the build directory designated by its argument. The script must be run as shown here:

$ source oe-init-build-env [build-directory]

Here, build-directory is an optional parameter for the name of the directory where the environment is set up (for example, we can use several build directories in a single Poky source tree); if it is not given, it defaults to build. The build-directory folder is the place where we perform the builds. In order to standardize the steps, we will use the following command throughout to initialize our environment:

$ source oe-init-build-env rpi-build

### Shell environment set up for builds. ###

You can now run 'bitbake <target>'

Common targets are:
    core-image-minimal
    core-image-sato
    meta-toolchain
    adt-installer
    meta-ide-support

You can also run generated qemu images with a command like 'runqemu qemux86'

When we initialize the build environment, the script creates a conf directory inside rpi-build. This folder contains two important files:

- local.conf: contains parameters to configure BitBake behavior.
- bblayers.conf: lists the different layers that BitBake takes into account. This list is assigned to the BBLAYERS variable.

Editing the local.conf file

The local.conf file under rpi-build/conf/ can configure every aspect of the build process. It is through this file that we can choose the target machine (the MACHINE variable), the distribution (the DISTRO variable), the type of package (the PACKAGE_CLASSES variable), and the host configuration (PARALLEL_MAKE, for example). The minimal set of variables we have to change from the defaults is the following:

BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}"
PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}"
MACHINE ?= "raspberrypi"

The BB_NUMBER_THREADS variable determines the number of tasks that BitBake will perform in parallel (tasks under Yocto; we're not necessarily talking about compilation).
By default, in build/conf/local.conf, this variable is initialized with ${@oe.utils.cpu_count()}, corresponding to the number of cores detected on the host system (/proc/cpuinfo). The PARALLEL_MAKE variable corresponds to make's -j option, which specifies the number of processes that GNU Make can run in parallel on a compilation task. Again, the number of cores present defines the default value used. The MACHINE variable is where we set the target machine we wish to build for, as defined in its .conf file; in our case, it is raspberrypi (from raspberrypi.conf).

Editing the bblayers.conf file

Now, we still have to add the layer specific to our target. This will make the recipes from this layer available to our build. Therefore, we should edit the build/conf/bblayers.conf file:

# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= "
  /home/packt/RASPBERRYPI/poky/meta
  /home/packt/RASPBERRYPI/poky/meta-yocto
  /home/packt/RASPBERRYPI/poky/meta-yocto-bsp
  "
BBLAYERS_NON_REMOVABLE ?= "
  /home/packt/RASPBERRYPI/poky/meta
  /home/packt/RASPBERRYPI/poky/meta-yocto
  "

Add the meta-raspberrypi line, so that the file becomes:

# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= "
  /home/packt/RASPBERRYPI/poky/meta
  /home/packt/RASPBERRYPI/poky/meta-yocto
  /home/packt/RASPBERRYPI/poky/meta-yocto-bsp
  /home/packt/RASPBERRYPI/poky/meta-raspberrypi
  "
BBLAYERS_NON_REMOVABLE ?= "
  /home/packt/RASPBERRYPI/poky/meta
  /home/packt/RASPBERRYPI/poky/meta-yocto
  "

Naturally, you have to adapt the absolute path (/home/packt/RASPBERRYPI here) depending on your own installation.
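If you prefer to script this edit rather than open an editor, the layer path can be appended with a one-line GNU sed command. A sketch, reusing the same install prefix as above (the indentation inside the BBLAYERS string is purely cosmetic, so the appended line needs no leading spaces):

$ sed -i '/meta-yocto-bsp/a /home/packt/RASPBERRYPI/poky/meta-raspberrypi' rpi-build/conf/bblayers.conf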
Building the Poky image

At this stage, we have to look at which of the available images (.bb files) are compatible with our platform.

Choosing the image

Poky provides several predesigned image recipes that we can use to build our own binary image. We can check the list of available images by running the following command from the poky directory:

$ ls meta*/recipes*/images/*.bb

All the recipes provide images that are, in essence, sets of unpacked and configured packages, generating a filesystem that we can use on actual hardware (for further information about the different images, you can visit http://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#ref-images). To these, we can add the images proposed by meta-raspberrypi:

$ ls meta-raspberrypi/recipes-core/images/*.bb
rpi-basic-image.bb  rpi-hwup-image.bb  rpi-test-image.bb

Here is an explanation of these images:

- rpi-hwup-image.bb: an image based on core-image-minimal.
- rpi-basic-image.bb: an image based on rpi-hwup-image.bb, with some added features (a splash screen).
- rpi-test-image.bb: an image based on rpi-basic-image.bb, which includes some packages present in meta-raspberrypi.

We will use one of these three recipes for the rest of this article. Note that these .bb files describe recipes, like all the others. They are organized logically, and here we have the ones for creating an image for the Raspberry Pi.

Running BitBake

At this point, what remains is to start the build engine, BitBake, which will parse all the recipes needed for the image you pass as a parameter (as an initial example, we can take rpi-basic-image):

$ bitbake rpi-basic-image
Loading cache: 100% |###############################################| ETA: 00:00:00
Loaded 1352 entries from dependency cache.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION        = "1.25.0"
BUILD_SYS         = "x86_64-linux"
NATIVELSBSTRING   = "Ubuntu-14.04"
TARGET_SYS        = "arm-poky-linux-gnueabi"
MACHINE           = "raspberrypi"
DISTRO            = "poky"
DISTRO_VERSION    = "1.7"
TUNE_FEATURES     = "arm armv6 vfp"
TARGET_FPU        = "vfp"
meta
meta-yocto
meta-yocto-bsp    = "master:08d3f44d784e06f461b7d83ae9262566f1cf09e4"
meta-raspberrypi  = "master:6c6f44136f7e1c97bc45be118a48bd9b1fef1072"

NOTE: Preparing RunQueue
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks

Once launched, BitBake begins by browsing all the .bb and .bbclass files that the environment provides access to and stores the information in a cache. Because BitBake's parser is parallelized, the first execution will always be longer, since it has to build the cache (this only takes a few extra seconds). Subsequent executions will be almost instantaneous, because BitBake will load the cache. As we can see from the previous command, before executing the task list, BitBake displays a trace that details the versions used (target, version, OS, and so on). Finally, BitBake starts the execution of the tasks and shows us the progress. Depending on your setup, you can go drink some coffee or even eat some pizza. If all goes well, you will see that the tmp/ subdirectory of the build directory (rpi-build) has been populated. The build directory contains about 20 GB after the creation of the image.

After a few hours of baking, we can rejoice at the result: the system image for our target:

$ ls rpi-build/tmp/deploy/images/raspberrypi/*sdimg
rpi-basic-image-raspberrypi.rpi-sdimg

This is the file that we will use to create our bootable SD card.

Creating a bootable SD card

Now that our image is complete, you can create a bootable SD card with the following command (remember to change /dev/sdX to the proper device name, and be careful not to kill your hard disk by selecting the wrong device):

$ sudo dd if=rpi-basic-image-raspberrypi.rpi-sdimg of=/dev/sdX bs=1M

Once the copying is complete, you can check whether the operation was successful using the following command (look at mmcblk0):

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
mmcblk0     179:0    0   3,7G  0 disk
├─mmcblk0p1 179:1    0    20M  0 part /media/packt/raspberrypi
└─mmcblk0p2 179:2    0   108M  0 part /media/packt/f075d6df-d8b8-4e85-a2e4-36f3d4035c3c

You can also look at the left-hand side of your file manager's interface.
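Because dd will overwrite whatever device you point it at, it pays to double-check the device node before flashing and to flush write caches afterwards. A cautious sketch (the device name here is an example; confirm yours with lsblk first):

$ lsblk    # identify the SD card, e.g. /dev/mmcblk0
$ sudo dd if=rpi-basic-image-raspberrypi.rpi-sdimg of=/dev/mmcblk0 bs=1M
$ sync     # make sure all writes reach the card before removing it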
Booting the image on the Raspberry Pi

This is surely the most anticipated moment of this article: booting our Raspberry Pi with a fresh Poky image. You just have to insert your SD card into the slot, connect the HDMI cable to your monitor, and connect the power supply (it is also recommended to use a mouse and a keyboard to shut down the device, unless you plan on just pulling the plug and possibly corrupting the boot partition). After connecting the power supply, you should see the Raspberry Pi splash screen. The login for the Yocto/Poky distribution is root.

Summary

In this article, we learned the steps needed to set up Poky and get our first image built. We ran that image on the Raspberry Pi, which gave us a good overview of the available capabilities.

Further resources on this subject:
- Programming on Raspbian [article]
- Working with a Webcam and Pi Camera [article]
- Creating a Supercomputer [article]


Getting Started with RStudio

Packt
16 Feb 2016
5 min read
The number of users adopting the R programming language has been increasing faster and faster in the last few years. The functions of the R console are limited when it comes to managing a lot of files or working with version control systems. This, in combination with the increasing adoption rate, is why a need for a better development environment arose. To serve this need, a team of R fans began to develop an integrated development environment (IDE) to make it easier to work on bigger projects and to collaborate with others. This IDE is RStudio. In this article, we will see how to work with RStudio and projects.

Working with RStudio and projects

In the times before RStudio, it was very hard to manage bigger projects with R in the R console, as you had to create all the folder structures on your own. When you work with projects or open a project, RStudio instantly takes several actions. For example, it starts a new and clean R session, sources the .Rprofile file in the project's main directory, and sets the current working directory to the project directory. So, you have a complete working environment for every individual project. RStudio will even adjust its own settings, such as active tabs, splitter positions, and so on, to where they were when the project was closed. But just because you can create projects with RStudio easily does not mean that you should create a new project every single time you write R code. If you just want to do a small analysis, we would recommend keeping one project where you save all your smaller scripts.

Creating a project with RStudio

RStudio offers you an easy way to create projects. Just navigate to File | New Project and you will see a popup window with the options shown in the following screenshot:

These options let you decide where to create your project from. Whether you want to start from scratch and create a new directory, associate your new project with an existing one, or create a project from a version control repository, the respective options are available. For now, we will focus on creating a new directory. The following screenshot shows you the next options available:

Locating your project

A very important question you have to ask yourself when creating a new project is where you want to save it. There are several options and details you have to pay attention to, especially when it comes to collaboration and different people working on the same project. You can save your project locally, on cloud storage, or with the help of a revision control system such as Git.

Creating your first project

To begin your first project, choose the New Directory option described before and create an empty project. Then, choose a name for the directory and the location that you want to save it in. You could, for example, create a projects folder on your Dropbox. The first project will be a small data analysis based on a dataset that was extracted from the 1974 issue of the Motor Trend US magazine. It comprises fuel consumption and ten aspects of automobile design and performance, such as the weight or number of cylinders, for 32 automobiles. It is included in the base R package, so we do not have to install a separate package to work with this dataset; it is automatically loaded when you start R.

As you can see, we left the Use packrat with this project option unchecked. Packrat is a dependency management tool that makes your R code more isolated, portable, and reproducible by giving your project its own privately managed package library. This is especially important when you want to create projects in an organizational context, where the code has to run on various computer systems and has to be usable by a lot of different users. This first project will just run locally and will not focus on a specific combination of package versions.

Organizing your folders

RStudio creates an empty directory for you that includes just the file Motor-Car-Trend-Analysis.Rproj. This file stores all the information on your project that RStudio needs for loading. But to stay organized, we have to create some folders in the directory. Create the following folders (a shell sketch for this follows below):

- data: all the data that we need for our analysis
- code: all the code files for cleaning up data, generating plots, and so on
- plots: all graphical outputs
- reports: all the reports that we create from our dataset
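If you prefer the command line, the same layout can be created from a shell inside the project directory. A minimal sketch, assuming the project root RStudio just created:

$ cd Motor-Car-Trend-Analysis
$ mkdir data code plots reports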
Saving the data

The Motor Trend Car Road Tests dataset is part of the datasets package, which is one of the preinstalled packages in R. But we will save the data to a CSV file in our data folder, after extracting it from the mtcars variable, to make sure our analysis is reproducible. Put the following lines of code in a new R script and save it as data.R in the code folder:

# write data into a csv file
write.csv(mtcars, file = "data/cars.csv", row.names = FALSE)

Analyzing the data

The analysis script will first have to load the data from the CSV file with the following line:

cars_data <- read.csv(file = "data/cars.csv", header = TRUE, sep = ",")

Summary

To learn more about RStudio, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Mastering Machine Learning with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-machine-learning-r)
- R Data Analysis Cookbook (https://www.packtpub.com/big-data-and-business-intelligence/r-data-analysis-cookbook)
- Mastering Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-data-analysis-r)

Further resources on this subject:
- RefresheR [article]
- Deep learning in R [article]
- Aspects of Data Manipulation in R [article]


Anatomy of an Azure Function App [Tutorial]

Sugandha Lahoti
18 Nov 2018
8 min read
Azure Functions let you easily run small pieces of code in the cloud without worrying about a whole application or the infrastructure to run it. With Azure Functions, you can use triggers to execute your code and bindings to simplify its input and output. In this article, we will look at the anatomy and structure of an Azure Function. This article is taken from the book Learning Azure Functions by Manisha Yadav and Mitesh Soni. In this book, you will learn techniques for scaling your Azure Functions and making the most of serverless architecture.

In this article, we will cover the following topics:

- Anatomy of Azure Functions
- Setting up a basic Azure Function

Anatomy of Azure Functions

Let's understand the different components and resources that are used while creating Azure Functions. The following image describes Functions in Azure under the different pricing plans:

Azure Function App

A Function App is a collection of one or more functions that are managed together. All the functions in a Function App share the same pricing plan, which can be a Consumption plan or an App Service plan. When we utilize Visual Studio Team Services for Continuous Integration and Continuous Delivery using build and release definitions, the Function App is also shared. The way we manage different resources in Azure with a Resource Group is similar to how we manage multiple functions with a Function App.

Function code

In this article, we will consider a scenario where a photography competition is held and photographers need to upload photographs to the portal. The moment a photograph is uploaded, a thumbnail should be created immediately. The function code is the main content that executes and performs the work, shown as follows:

var Jimp = require("jimp");

// A JavaScript function must export a single function via module.exports
// so the runtime can find and execute it
module.exports = (context, myBlob) => {
    // context is a mandatory first parameter; it is used to pass data
    // to and from the function. The second parameter's name (myBlob)
    // matches the binding name declared in function.json.
    // Read the photograph with Jimp
    Jimp.read(myBlob).then((image) => {
        // Manipulate the photograph: resize it
        // (Jimp.AUTO can be passed as one of the values)
        image
            .resize(200, 200)
            .quality(40)
            .getBuffer(Jimp.MIME_JPEG, (error, stream) => {
                // Check for errors while processing the photograph
                if (error) {
                    context.log('There was an error processing the photograph.');
                    // Tell the runtime the function has finished, to avoid a timeout
                    context.done(error);
                } else {
                    context.log('Successfully processed the photograph');
                    // Bind the stream to the output binding to create a new blob,
                    // and tell the runtime the function has finished
                    context.done(null, stream);
                }
            });
    });
};
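One thing to note before moving on: this function depends on the jimp npm package, which is not present in a fresh Function App; the log excerpt near the end of this article shows the resulting Error: Cannot find module 'jimp'. One common fix is to install the module from the Function App's console (under Platform features | Console). The commands below are a sketch and assume the console starts in D:\home:

cd site\wwwroot
npm init -y
npm install jimp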
Function configuration

The function configuration defines the function's bindings and other settings. It contains configuration such as the type of trigger, the paths of blob containers, and so on:

{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "photographs/{name}",
      "connection": "origphotography2017_STORAGE",
      "dataType": "binary"
    },
    {
      "type": "blob",
      "name": "$return",
      "path": "thumbnails/{name}",
      "connection": "origphotography2017_STORAGE",
      "direction": "out"
    }
  ],
  "disabled": false
}

The function runtime uses this configuration file to decide which events to monitor and how to pass data to and from the function execution.

Function settings

We can limit the daily usage quota and manage application settings. We can enable Azure Function proxies and change the edit mode of our Function App. The application settings in the Function App are similar to the application settings in Azure App Services. We can configure the .NET Framework v4.6, the Java version, the platform, ARR affinity, remote debugging, the remote Visual Studio version, app settings, and connection strings.

Runtime

The runtime is responsible for executing function code on the underlying WebJobs SDK host. In the next section, we will create our Function App and functions using the Azure Portal.

Setting up a basic Azure Function

Let's create an Azure Function in the Azure Portal:

1. Go to https://portal.azure.com.
2. Click on Function Apps in the left sidebar. There is no Function App available as of now.
3. Click on the plus (+) sign, search for Function Apps, and then click on Create.
4. Provide the app name, subscription details, and an existing resource group. Select Consumption Plan as the hosting plan, and then select a location.
5. Select Create New in Storage and click on Create.
6. Go to Function Apps in the left sidebar and verify that the recently created Function App is available in the list.

Click on the Function App, and we can see details related to the subscription, resource group, URL, location, and App Service plan/pricing tier. We can stop or restart the function from the same pane. The Settings tab provides details on the runtime version, application settings, and the limit on daily usage. It also allows us to keep the Function App in Read/Write or Read Only mode, and we can enable deployment slots, a well-known feature of Azure App Services.

In the Platform features tab, we get different kinds of options for the Function App, covering MONITORING, NETWORKING, DEPLOYMENT TOOLS, and so on. We will cover most of these features in this chapter and in upcoming chapters in detail. Click on Properties and verify the different details that are available. There is a property named OUTBOUND IP ADDRESSES, which is useful if we need the IP addresses of the Function App for whitelisting. Click on App Service plan, and it will open the consumption plan in the pane.

On the left sidebar in the Azure Portal, go to Storage services and verify the storage accounts that are available. What we want to achieve is this: when we upload an image to a specific blob container, the function should execute immediately and create a thumbnail in another blob container.

Create a new storage account. Go to the Overview section of the storage account and check all the available settings. Then click on the Containers section; there is no container available in the storage account yet. Click on + Container, fill in the Name and Access type, and click on OK. Similarly, create another container to store the thumbnails, and verify that both containers exist in the storage account.
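As an aside, if you would rather script the storage setup than click through the portal, the containers can also be created with the Azure CLI. A sketch, assuming the az CLI is installed, you are logged in, and the storage account is named origphotography2017 (the name behind the connection used in the bindings above):

$ az storage container create --name photographs --account-name origphotography2017
$ az storage container create --name thumbnails --account-name origphotography2017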
Once we have all the components ready to achieve our main objective, creating a function that generates thumbnails of photographs, we can create the function itself:

1. Click on the Functions section in the Function App.
2. Select Webhook + API and choose a language; then click on Custom function so that we can utilize the already available templates.
3. Select JavaScript as the language and choose the BlobTrigger template.
4. Provide the name of our function, give the path to the source container, and select the Storage account connection.
5. Look at the function and code available in the code editor in the Microsoft Azure Portal.

Before we write the actual code in the function, let's configure the triggers and outputs:

1. Select the Blob parameter name, Storage account connection, and Path.
2. Click on New Output and select Azure Blob Storage.
3. Select the Blob parameter name, Storage account connection, and Path, and click on Save.
4. Review the final output bindings, and click on the Advanced editor link to review function.json.

Now, paste the function code for creating a thumbnail into the code editor. Once everything is set up and configured, let's upload a photograph to the photographs blob container: click on the container, click on Upload, select the photograph that needs to be uploaded, and click on Upload.

Go to the Function App and check the logs. We may get an Error: Cannot find module 'jimp':

2017-06-30T16:54:18 Welcome, you are now connected to log-streaming service.
2017-06-30T16:54:41.202 Script for function 'photoProcessing' changed. Reloading.
2017-06-30T16:55:01.309 Function started (Id=411e4d84-5ef0-4ca9-b963-ed94c0ba8e84)
2017-06-30T16:55:01.371 Function completed (Failure, Id=411e4d84-5ef0-4ca9-b963-ed94c0ba8e84, Duration=59ms)
2017-06-30T16:55:01.418 Exception while executing function: Functions.photoProcessing. mscorlib: Error: Cannot find module 'jimp'
    at Function.Module._resolveFilename (module.js:455:15)
    at Function.Module._load (module.js:403:25)
    at Module.require (module.js:483:17)
    at require (internal/module.js:20:19)
    at Object.<anonymous> (D:\home\site\wwwroot\photoProcessing\index.js:1:74)
    at Module._compile (module.js:556:32)
    at Object.Module._extensions..js (module.js:565:10)
    at Module.load (module.js:473:32)
    at tryModuleLoad (module.js:432:12)
    at Function.Module._load (module.js:424:3)

This error simply means the jimp module has not been installed in the Function App yet; see the note in the Function code section earlier for one way to install it.

Thus, we created our first function, which processes a photograph and creates a thumbnail for a photography competition scenario. We also saw the anatomy of Azure Functions. To learn more about how triggers can activate a function, and how bindings can be used to output the results of a function, read the book Learning Azure Functions.

Related reading:
- Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
- Working with Azure container service cluster [Tutorial]
- Implementing Identity Security in Microsoft Azure [Tutorial]


Building a Web Service with Laravel 5

Kunal Chaudhari
24 Apr 2018
15 min read
A web service is an application that runs on a server and allows a client (such as a browser) to remotely write/retrieve data to/from the server over HTTP. In this article we will be covering the following topics:

- Using Laravel to create a web service
- Writing database migrations and seed files
- Creating API endpoints to make data publicly accessible
- Serving images from Laravel

The interface of a web service will be one or more API endpoints, sometimes protected with authentication, that return data in an XML or JSON payload. Web services are a speciality of Laravel, so it won't be hard to create one for Vuebnb. We'll use routes for our API endpoints and represent the listings with Eloquent models that Laravel will seamlessly synchronize with the database. Laravel also has built-in features for API architectures such as REST, though we won't need those for our simple use case.

Mock data

The mock listing data is in the file database/data.json. This file includes a JSON-encoded array of 30 objects, with each object representing a different listing. Having built the listing page prototype, you'll no doubt recognize a lot of the same properties on these objects, including the title, address, and description.

database/data.json:

[
  {
    "id": 1,
    "title": "Central Downtown Apartment with Amenities",
    "address": "...",
    "about": "...",
    "amenity_wifi": true,
    "amenity_pets_allowed": true,
    "amenity_tv": true,
    "amenity_kitchen": true,
    "amenity_breakfast": true,
    "amenity_laptop": true,
    "price_per_night": "$89",
    "price_extra_people": "No charge",
    "price_weekly_discount": "18%",
    "price_monthly_discount": "50%"
  },
  {
    "id": 2,
    ...
  },
  ...
]

Each mock listing includes several images of the room as well. Images aren't really part of a web service, but they will be stored in a public folder in our app to be served as needed.

Database

Our web service will require a database table for storing the mock listing data. To set this up we'll need to create a schema and migration. We'll then create a seeder that will load and parse our mock data file and insert it into the database, ready for use in the app.

Migration

A migration is a special class that contains a set of actions to run against the database, such as creating or modifying a database table. Migrations ensure your database gets set up identically every time you create a new instance of your app, for example, when installing in production or on a teammate's machine. To create a new migration, use the make:migration Artisan CLI command. The argument of the command should be a snake-cased description of what the migration will do:

$ php artisan make:migration create_listings_table

You'll now see your new migration in the database/migrations directory. You'll notice the filename has a prefixed timestamp, such as 2017_06_20_133317_create_listings_table.php. The timestamp allows Laravel to determine the proper order of the migrations, in case it needs to run more than one at a time.
Your new migration declares a class that extends Migration. It overrides two methods: up, which is used to add new tables, columns, or indexes to your database; and down, which is used to delete them. We'll implement these methods shortly.

2017_06_20_133317_create_listings_table.php:

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateListingsTable extends Migration
{
    public function up()
    {
        //
    }

    public function down()
    {
        //
    }
}

Schema

A schema is a blueprint for the structure of a database. For a relational database such as MySQL, the schema will organize data into tables and columns. In Laravel, schemas are declared by using the Schema facade's create method. We'll now make a schema for a table to hold Vuebnb listings. The columns of the table will match the structure of our mock listing data. Note that we set a default false value for the amenities and allow the prices to have a NULL value; all other columns require a value. The schema will go inside our migration's up method. We'll also fill out down with a call to Schema::drop.

2017_06_20_133317_create_listings_table.php:

public function up()
{
    Schema::create('listings', function (Blueprint $table) {
        $table->unsignedInteger('id');
        $table->primary('id');
        $table->string('title');
        $table->string('address');
        $table->longText('about');

        // Amenities
        $table->boolean('amenity_wifi')->default(false);
        $table->boolean('amenity_pets_allowed')->default(false);
        $table->boolean('amenity_tv')->default(false);
        $table->boolean('amenity_kitchen')->default(false);
        $table->boolean('amenity_breakfast')->default(false);
        $table->boolean('amenity_laptop')->default(false);

        // Prices
        $table->string('price_per_night')->nullable();
        $table->string('price_extra_people')->nullable();
        $table->string('price_weekly_discount')->nullable();
        $table->string('price_monthly_discount')->nullable();
    });
}

public function down()
{
    Schema::drop('listings');
}

A facade is an object-oriented design pattern for creating a static proxy to an underlying class in the service container. The facade is not meant to provide any new functionality; its only purpose is to provide a more memorable and easily readable way of performing a common action. Think of it as an object-oriented helper function.

Execution

Now that we've set up our new migration, let's run it with this Artisan command:

$ php artisan migrate

You should see an output like this in the Terminal:

Migrating: 2017_06_20_133317_create_listings_table
Migrated:  2017_06_20_133317_create_listings_table

To confirm the migration worked, let's use Tinker to show the new table structure. If you've never used Tinker, it's a REPL tool that allows you to interact with a Laravel app on the command line. When you enter a command into Tinker it will be evaluated as if it were a line in your app code.

Firstly, open the Tinker shell:

$ php artisan tinker

Now enter a PHP statement for evaluation. Let's use the DB facade's select method to run an SQL DESCRIBE query to show the table structure:

>>> DB::select('DESCRIBE listings;');

The output is quite verbose so I won't reproduce it here, but you should see an object with all your table details, confirming the migration worked.
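If you need to undo a migration while developing, Artisan has companion commands for that as well. These are standard Laravel tooling rather than steps from this walkthrough:

$ php artisan migrate:status     # list migrations and whether each has run
$ php artisan migrate:rollback   # revert the last batch by calling each migration's down method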
Seeding mock listings

Now that we have a database table for our listings, let's seed it with the mock data. To do so, we're going to do the following:

1. Load the database/data.json file.
2. Parse the file.
3. Insert the data into the listings table.

Creating a seeder

Laravel includes a seeder class that we can extend called Seeder. Use this Artisan command to implement it:

$ php artisan make:seeder ListingsTableSeeder

When we run the seeder, any code in the run method is executed.

database/seeds/ListingsTableSeeder.php:

<?php

use Illuminate\Database\Seeder;

class ListingsTableSeeder extends Seeder
{
    public function run()
    {
        //
    }
}

Loading the mock data

Laravel provides a File facade that allows us to open files from disk as simply as File::get($path). To get the full path to our mock data file we can use the base_path() helper function, which returns the path to the root of our application directory as a string. It's then trivial to convert this JSON file to a PHP array using the built-in json_decode function. Once the data is an array, it can be directly inserted into the database, given that the column names of the table are the same as the array keys.

database/seeds/ListingsTableSeeder.php:

public function run()
{
    $path = base_path() . '/database/data.json';
    $file = File::get($path);
    $data = json_decode($file, true);
}

Inserting the data

In order to insert the data, we'll use the DB facade again. This time we'll call the table method, which returns an instance of Builder. The Builder class is a fluent query builder that allows us to query the database by chaining constraints, for example, DB::table(...)->where(...)->join(...) and so on. Let's use the insert method of the builder, which accepts an array of column names and values.

database/seeds/ListingsTableSeeder.php:

public function run()
{
    $path = base_path() . '/database/data.json';
    $file = File::get($path);
    $data = json_decode($file, true);
    DB::table('listings')->insert($data);
}

Executing the seeder

To execute the seeder we must call it from the DatabaseSeeder.php file, which is in the same directory.

database/seeds/DatabaseSeeder.php:

<?php

use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    public function run()
    {
        $this->call(ListingsTableSeeder::class);
    }
}

With that done, we can use the Artisan CLI to execute the seeder:

$ php artisan db:seed

You should see the following output in your Terminal:

Seeding: ListingsTableSeeder

We'll again use Tinker to check our work. There are 30 listings in the mock data, so to confirm the seed was successful, let's check for 30 rows in the database:

$ php artisan tinker
>>> DB::table('listings')->count();
# Output: 30

Finally, let's inspect the first row of the table just to be sure its content is what we expect:

>>> DB::table('listings')->get()->first();

Here is the output:

=> {#732
     +"id": 1,
     +"title": "Central Downtown Apartment with Amenities",
     +"address": "No. 11, Song-Sho Road, Taipei City, Taiwan 105",
     +"about": "...",
     +"amenity_wifi": 1,
     +"amenity_pets_allowed": 1,
     +"amenity_tv": 1,
     +"amenity_kitchen": 1,
     +"amenity_breakfast": 1,
     +"amenity_laptop": 1,
     +"price_per_night": "$89",
     +"price_extra_people": "No charge",
     +"price_weekly_discount": "18%",
     +"price_monthly_discount": "50%"
   }

If yours looks like that, you're ready to move on!
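A related convenience worth knowing about (not used in this walkthrough): whenever the mock data changes, the migrations and seeds can be replayed together in a single command:

$ php artisan migrate:refresh --seed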
Listing model

We've now successfully created a database table for our listings and seeded it with mock listing data. How do we access this data from the Laravel app? We saw how the DB facade lets us execute queries on our database directly, but Laravel provides a more powerful way to access data via the Eloquent ORM.

Eloquent ORM

Object-Relational Mapping (ORM) is a technique for converting data between incompatible systems in object-oriented programming languages. Relational databases such as MySQL can only store scalar values, such as integers and strings, organized within tables. We want to make use of rich objects in our app, though, so we need a means of robust conversion. Eloquent is the ORM implementation used in Laravel. It uses the active record design pattern, where a model is tied to a single database table, and an instance of the model is tied to a single row.

To create a model in Laravel using the Eloquent ORM, simply extend the Illuminate\Database\Eloquent\Model class using Artisan:

$ php artisan make:model Listing

This generates a new file.

app/Listing.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Listing extends Model
{
    //
}

How do we tell the ORM what table to map to, and what columns to include? By default, the Model class uses the lowercase, plural form of the class name (Listing becomes listings) as the table name. And, by default, it uses all the fields from the table. Now, any time we want to load our listings we can use code such as this, anywhere in our app:

<?php

// Load all listings
$listings = \App\Listing::all();

// Iterate listings, echo the address
foreach ($listings as $listing) {
    echo $listing->address . "\n";
}

/*
 * Output:
 *
 * No. 11, Song-Sho Road, Taipei City, Taiwan 105
 * 110, Taiwan, Taipei City, Xinyi District, Section 5, Xinyi Road, 7
 * No. 51, Hanzhong Street, Wanhua District, Taipei City, Taiwan 108
 * ...
 */

Casting

The data types in a MySQL database don't completely match up to those in PHP. For example, how does an ORM know if a database value of 0 is meant to be the number 0, or the Boolean value of false? An Eloquent model can be given a $casts property to declare the data type of any specific attribute. $casts is an array of key/values where the key is the name of the attribute being cast, and the value is the data type we want to cast to. For the listings table, we will cast the amenities attributes as Booleans.

app/Listing.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Listing extends Model
{
    protected $casts = [
        'amenity_wifi' => 'boolean',
        'amenity_pets_allowed' => 'boolean',
        'amenity_tv' => 'boolean',
        'amenity_kitchen' => 'boolean',
        'amenity_breakfast' => 'boolean',
        'amenity_laptop' => 'boolean'
    ];
}

Now these attributes will have the correct type, making our model more robust:

echo gettype($listing->amenity_wifi); // boolean
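Because Listing is a full Eloquent model, we also get the fluent query builder on the model itself for free. A quick Tinker session illustrates this (an aside, not one of the build steps):

$ php artisan tinker
>>> \App\Listing::where('amenity_wifi', true)->count();
>>> \App\Listing::where('amenity_wifi', true)->get()->first()->title;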
Public interface

The final piece of our web service is the public interface that will allow a client app to request the listing data. Since the Vuebnb listing page is designed to display one listing at a time, we'll at least need an endpoint to retrieve a single listing. Let's now create a route that will match any incoming GET requests to the URI /api/listing/{listing}, where {listing} is an ID. We'll put this in the routes/api.php file, where routes are automatically given the /api/ prefix and have middleware optimized for use in a web service by default.

We'll use a closure function to handle the route. The function will have a $listing argument, which we'll type hint as an instance of the Listing class, that is, our model. Laravel's service container will resolve this as an instance with the ID matching {listing}. We can then encode the model as JSON and return it as a response.

routes/api.php:

<?php

use App\Listing;

Route::get('listing/{listing}', function (Listing $listing) {
    return $listing->toJson();
});

We can test that this works by using the curl command from the Terminal:

$ curl http://vuebnb.test/api/listing/1

The response will be the listing with ID 1.

Controller

We'll be adding more routes to retrieve the listing data as the project progresses. It's a best practice to use a controller class for this functionality to keep a separation of concerns. Let's create one with the Artisan CLI:

$ php artisan make:controller ListingController

We'll then move the functionality from the route into a new method, get_listing_api.

app/Http/Controllers/ListingController.php:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Listing;

class ListingController extends Controller
{
    public function get_listing_api(Listing $listing)
    {
        return $listing->toJson();
    }
}

For the Route::get method we can pass a string as the second argument instead of a closure function. The string should be in the form [controller]@[method], for example, ListingController@get_listing_api. Laravel will correctly resolve this at runtime.

routes/api.php:

<?php

Route::get('/listing/{listing}', 'ListingController@get_listing_api');

Images

As stated at the beginning of the article, each mock listing comes with several images of the room. These images are not in the project code and must be copied from a parallel directory in the code base called images. Copy the contents of this directory into the public/images folder:

$ cp -a ../images/. ./public/images

Once you've copied these files, public/images will have 30 sub-folders, one for each mock listing. Each of these folders will contain exactly four main images and a thumbnail image.

Accessing images

Files in the public directory can be directly requested by appending their relative path to the site URL. For example, the default CSS file, public/css/app.css, can be requested at http://vuebnb.test/css/app.css. The advantage of using the public folder, and the reason we've put our images there, is to avoid having to create any logic for accessing them. A frontend app can then directly call the images in an img tag. You may think it's inefficient for our web server to serve images like this, and you'd be right. Let's try to open one of the mock listing images in our browser to test this thesis: http://vuebnb.test/images/1/Image_1.jpg

Image links

The payload for each listing in the web service should include links to these new images so a client app knows where to find them. Let's add the image paths to our listing API payload so it looks like this:

{
  "id": 1,
  "title": "...",
  "description": "...",
  ...
  "image_1": "http://vuebnb.test/images/1/Image_1.jpg",
  "image_2": "http://vuebnb.test/images/1/Image_2.jpg",
  "image_3": "http://vuebnb.test/images/1/Image_3.jpg",
  "image_4": "http://vuebnb.test/images/1/Image_4.jpg"
}

To implement this, we'll use our model's toArray method to make an array representation of the model. We'll then easily be able to add new fields. Each mock listing has exactly four images, numbered 1 to 4, so we can use a for loop and the asset helper to generate fully-qualified URLs to files in the public folder. We finish by creating an instance of the Response class by calling the response helper.
We use its json method and pass in our array of fields, returning the result.

app/Http/Controllers/ListingController.php:

public function get_listing_api(Listing $listing)
{
    $model = $listing->toArray();
    for ($i = 1; $i <= 4; $i++) {
        $model['image_' . $i] = asset(
            'images/' . $listing->id . '/Image_' . $i . '.jpg'
        );
    }
    return response()->json($model);
}

The /api/listing/{listing} endpoint is now ready for consumption by a client app. To summarize, we built a web service with Laravel to make the data publicly accessible. This involved setting up a database table using a migration and schema, then seeding the database with mock listing data. We then created a public interface for the web service using routes.

You enjoyed an excerpt from Full-Stack Vue.js 2 and Laravel 5, a book written by Anthony Gore that will help you bring the frontend and backend together with Vue, Vuex, and Laravel.

Read more:
- Testing RESTful Web Services with Postman
- How to develop RESTful web services in Spring


UITableView Touch Up

Packt
14 Dec 2016
24 min read
In this article by Donny Wals, from the book Mastering iOS 10 Programming, we will go through a UITableView touch up. Chances are that you have built a simple app before. If you have, there's a good probability that you have used UITableView. UITableView is a core component in many applications; virtually all applications that display a list of some sort make use of it. Because UITableView is such an important component in the world of iOS, I want you to dive into it straight away. You may or may not have looked at UITableView before, but that's okay. You'll be up to speed in no time, and you'll learn how this component achieves that smooth 60 frames per second (fps) scrolling that users know and love. If your app can maintain a steady 60 fps, it will feel more responsive, and scrolling will feel perfectly smooth to users, which is exactly what you want. We'll also cover new UITableView features that make it even easier to optimize your table views.

In addition to covering the basics of UITableView, you'll learn how to make use of Contacts.framework to build an application that shows a list of your users' contacts, similar to what the native Contacts app does on iOS. The contacts in the UITableView component will be rendered in a custom cell. You will learn how to create such a cell using Auto Layout. Auto Layout is a technique that will be covered throughout this book because it's an important part of every iOS developer's tool belt. If you haven't used Auto Layout before, that's okay. This article will cover a few basics, and the layout is relatively simple, so you can gradually get used to it as we go.

To sum it all up, this article covers:

- Configuring and displaying UITableView
- Fetching a user's contacts through Contacts.framework
- UITableView delegate and data source

Setting up the User Interface (UI)

Every time you start a new project in Xcode, you have to pick a template for your application. These templates will provide you with a bit of boilerplate code, or sometimes they will configure a very basic layout for you. Throughout this book, the starting point will always be the Single View Application template. This template provides you with a bare minimum of boilerplate code, which enables you to start from scratch every time and will boost your knowledge of how the Xcode-provided templates work internally.

In this article, you'll create an app called HelloContacts. This is the app that will render your user's contacts in a UITableView. Create a new project by selecting File | New | Project. Select the Single View template, give your project a name (HelloContacts), and make sure you select Swift as the language for your project. You can uncheck all CoreData- and testing-related checkboxes; they aren't of interest right now. Your configuration should resemble the following screenshot:

Once you have your app configured, open the Main.storyboard file. This is where you will create your UITableView and give it a layout. Storyboards are a great way for you to work on an app and have all of the screens in your app visualized at once. If you have worked with UITableView before, you may have used UITableViewController. This is a subclass of UIViewController that has a UITableView set as its view. It also abstracts away some of the more complex concepts of UITableView that we are interested in.
So, for now, you will be working with the UIViewController subclass that holds a UITableView and is configured manually.

On the right-hand side of the window, you'll find the Object Library, which is opened by clicking on the circular icon that contains a square in the bottom half of the sidebar. In the Object Library, look for a UITableView. If you start typing the name of what you're looking for in the search box, the results will automatically be filtered for you. After you locate it, drag it into your app's view. Then, with the UITableView selected, drag the white squares to all corners of the screen so that it covers your entire viewport.

If you go to the dynamic viewport inspector at the bottom of the screen by clicking on the current device name, as shown in the following screenshot, and select a larger device such as an iPad or a smaller device such as the iPhone SE, you will notice that the UITableView doesn't cover the viewport as nicely. On smaller screens, the UITableView will be larger than the viewport; on larger screens, it simply doesn't stretch:

This is why we will use Auto Layout. Auto Layout enables you to create layouts that adapt to different viewports, making sure your app looks good on all of the devices that are currently out there. For the UITableView, you can pin its edges to the edges of the superview, which is the view controller's main view. This will make sure the table stretches or shrinks to fill the screen at all times. Auto Layout uses constraints to describe layouts, so the UITableView should get constraints that describe its relation to the edges of your view controller.

The easiest way to add these constraints is to let Xcode handle it for you. To do this, switch the dynamic viewport inspector back to the view you initially selected. First, ensure the UITableView properly covers the entire viewport, and then click on the Resolve Auto Layout Issues button at the bottom-right corner of the screen and select Reset to Suggested Constraints:

This button automatically adds the required constraints for you. The added constraints will ensure that each edge of the UITableView sticks to the corresponding edge of its superview. You can manually inspect these constraints in the Document Outline on the left-hand side of the Interface Builder window. Make sure that everything works by changing the preview device in the dynamic viewport inspector again. You should verify that no matter which device you choose, the table stretches or shrinks to cover the entire view at all times.

Now that you have set up your project with the UITableView added to the interface and the constraints in place, it's time to start writing some code. The next step is to use Contacts.framework to fetch some contacts from your user's address book.

Fetching a user's contacts

In the introduction to this article, it was mentioned that we would use Contacts.framework to fetch a user's contacts list and display it in UITableView. Before we get started, we need to be sure we have access to the user's address book. In iOS 10, privacy is a bit more restrictive than it was earlier. If you want to have access to a user's contacts, you need to specify this in your application's Info.plist file. If you fail to specify the correct keys for the data your application uses, it will crash without warning. So, before attempting to load your user's contacts, you should take care of adding the correct key to Info.plist.
To add the key, open Info.plist from the list of files in the Project Navigator on the left and hover over Information Property List at the top of the file. A plus icon should appear, which will add an empty key with a search box when you click on it. If you start typing Privacy – contacts, Xcode will filter options until there is just one left, that is, the key for contact access. In the value column, fill in a short description about what you are going to use this access for. In our app, something like read contacts to display them in a list should be sufficient. Whenever you need access to photos, Bluetooth, camera, microphone, and more, make sure you check whether your app needs to specify this in its Info.plist. If you fail to specify a key that's required, your app will crash and will not make it past Apple's review process. Now that you have configured your app to specify that it wants to be able to access contact data, let's get down to writing some code. Before reading the contacts, you'll need to make sure the user has given permission to access contacts. You'll have to check this first, after which the code should either fetch contacts or ask the user for permission to access the contacts. Add the following code to ViewController.swift. After doing so, we'll go over what this code does and how it works:

import UIKit
import Contacts // required for CNContactStore and the CNContact keys

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let store = CNContactStore()

        if CNContactStore.authorizationStatus(for: .contacts) == .notDetermined {
            // We haven't asked for permission yet; ask now and fetch the
            // contacts once the user grants access.
            store.requestAccess(for: .contacts, completionHandler: { [weak self] authorized, error in
                if authorized {
                    self?.retrieveContacts(fromStore: store)
                }
            })
        } else if CNContactStore.authorizationStatus(for: .contacts) == .authorized {
            // Permission was already granted earlier.
            retrieveContacts(fromStore: store)
        }
    }

    func retrieveContacts(fromStore store: CNContactStore) {
        // The keys describe which contact properties we want to fetch.
        let keysToFetch = [CNContactGivenNameKey as CNKeyDescriptor,
                           CNContactFamilyNameKey as CNKeyDescriptor,
                           CNContactImageDataKey as CNKeyDescriptor,
                           CNContactImageDataAvailableKey as CNKeyDescriptor]

        let containerId = store.defaultContainerIdentifier()
        let predicate = CNContact.predicateForContactsInContainer(withIdentifier: containerId)
        let contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch)

        print(contacts)
    }
}

In the viewDidLoad method, we will get an instance of CNContactStore. This is the object that will access the user's contacts database to fetch the results you're looking for. Before you can access the contacts, you need to make sure that the user has given your app permission to do so. First, check whether the current authorizationStatus is equal to .notDetermined. This means that we haven't asked permission yet, and it's a great time to do so. When asking for permission, we pass a completionHandler. This handler is a closure: basically, a function without a name that gets called when the user has responded to the permission request. If your app is properly authorized after asking permission, the retrieveContacts method is called to actually perform the retrieval. If the app already had permission, we'll call retrieveContacts right away. Completion handlers are found throughout the Foundation and UIKit frameworks. You typically pass them to methods that perform a task that could take a while and is performed parallel to the rest of your application, so the user interface can continue running without waiting for the result. 
A simplified implementation of such a function could look like this:

func doSomething(completionHandler: (Int) -> Void) {
    // perform some actions
    let resultingValue = theResultOfSomeAction()
    // hand the result back to the caller through the closure
    completionHandler(resultingValue)
}

You'll notice that actually calling completionHandler looks identical to calling an ordinary function or method. The idea of such a completion handler is that we can specify a block of code, a closure, that is supposed to be executed at a later time, for example, after performing a task that is potentially slow. You'll find plenty of other examples of callback handlers and closures throughout this book, as it's a common pattern in programming. The retrieveContacts method in ViewController.swift is responsible for actually fetching the contacts and is called with a parameter named store. It's set up like this so we don't have to create a new store instance, since we already created one in viewDidLoad. When fetching a list of contacts, you use a predicate. We won't go into too much detail on predicates and how they work yet. The main goal of the predicate is to establish a condition to filter the contacts database on. In addition to a predicate, you also provide a list of keys your code wants to fetch. These keys represent properties that a contact object can have. They represent data, such as e-mail, phone numbers, names, and more. In this example, you only need to fetch a contact's given name, family name, and contact image. To make sure the contact image is available at all, a key is requested for that as well. When everything is configured, a call is made to unifiedContacts(matching:, keysToFetch:). This method call can throw an error, and since we're currently not interested in the error, try! is used to tell the compiler that we want to pretend the call can't fail, and if it does, the application should crash. When you're building your own app, you might want to wrap this call in a do {} catch {} block to make sure that your app doesn't crash if errors occur. If you run your app now, you'll see that you're immediately asked for permission to access contacts. If you allow this, you will see a list of contacts printed in the console. Next, let's display some contact information in the contacts table! Creating a custom UITableViewCell for our contacts To display contacts in your UITableView, you will need to set up a few more things. First and foremost, you'll need to create a UITableViewCell that displays contact information. To do this, you'll create a custom UITableViewCell by creating a subclass. The design for this cell will be created in Interface Builder, so you're going to add @IBOutlets in your UITableViewCell subclass. These @IBOutlets are the connection points between Interface Builder and your code. Designing the contact cell The first thing you need to do is drag a UITableViewCell out from the Object Library and drop it on top of UITableView. This will add the cell as a prototype. Next, drag out a UILabel and a UIImageView from the Object Library to the newly added UITableViewCell, and arrange them as they are arranged in the following figure. After you've done this, select both the UILabel and the UIImageView, and use the Reset to Suggested Constraints option you used earlier to lay out UITableView. If you have both the views selected, you should also be able to see the blue lines that are visible in the following screenshot: These blue lines represent the constraints that were added to lay out your label and image. You can see a constraint that offsets the label from the left side of the cell. However, there is also a constraint that spaces the label and the image. 
The horizontal line through the middle of the cell is a constraint that vertically centers the label and image inside of the cell. You can inspect these constraints in detail in the Document Outline on the right. Now that our cell is designed, it's time to create a custom subclass for it and hook up the @IBOutlets. Creating the cell subclass To get started, create a new file (File | New | File…) and select a Cocoa Touch file. Name the file ContactTableViewCell and make sure it subclasses UITableViewCell, as shown in the following screenshot: When you open the newly created file, you'll see two methods already added to the template for ContactTableViewCell.swift: awakeFromNib and setSelected(_:animated:). The awakeFromNib method is called the very first time this cell is created; you can use this method to do some initial setup that's required to be executed only once for your cell. The other method is used to customize your cell when a user taps on it. You could, for instance, change the background color or text color, or even perform an animation. For now, you can delete both of these methods and replace the contents of this class with the following code:

@IBOutlet var nameLabel: UILabel!
@IBOutlet var contactImage: UIImageView!

The preceding code should be the entire body of the ContactTableViewCell class. It creates two @IBOutlets that will allow you to connect your prototype cell with your code so that you can configure the contact's name and image later. In the Main.storyboard file, you should select your cell, and in the Identity Inspector on the right, set its class to ContactTableViewCell (as shown in the following screenshot). This will make sure that Interface Builder knows which class it should use for this cell, and it will make the @IBOutlets available to Interface Builder. Now that our cell has the correct class, select the label that will hold the contact's name in your prototype cell and click on Connections Inspector. Then, drag a new referencing outlet from the Connections Inspector to your cell and select nameLabel to connect the UILabel in the prototype cell to the @IBOutlet in the code (refer to the following screenshot). Perform the same steps for UIImageView, selecting the contactImage option instead of nameLabel. The last thing we need to do is provide a reuse identifier for our cell. Click on Attributes Inspector after selecting your cell. In Attributes Inspector, you will find an input field for the Identifier. Set this input field to ContactTableViewCell. The reuse identifier is the identifier that is used to tell UITableView which cell it should retrieve when one needs to be created. Since the custom UITableViewCell is all set up now, we need to make sure UITableView is aware of the fact that our ViewController class will be providing it with the required data and cells. Displaying the list of contacts When you're implementing UITableView, it's good to be aware of the fact that you're actually working with a fairly complex component. This is why we didn't pick a UITableViewController at the beginning of this article. UITableViewController does a pretty good job of hiding the complexities of UITableView from the developers. The point of this article isn't just to display a list of contacts; its purpose is also to introduce some advanced concepts about a construct that you might have seen before without ever being aware of it. Protocols and delegation Throughout the iOS SDK and the Foundation framework, the delegate design pattern is used. 
Delegation provides a way for objects to have some other object handle tasks on their behalf. This allows great decoupling of certain tasks and provides a powerful way to allow communication between objects. The following image visualizes the delegation pattern for a UITableView component and its UITableViewDataSource: The UITableView uses two objects that help in the process of rendering a list. One is called the delegate, the other is called the data source. When you use a UITableView, you need to explicitly configure the data source and delegate properties. At runtime, the UITableView will call methods on its delegate and data source in order to obtain information about cells, handle interactions, and more. If you look at the documentation for the UITableView delegate property, it will tell you that its type is UITableViewDelegate?. This means that the delegate's type is UITableViewDelegate. The question mark indicates that this value could be nil; we call this an Optional. The reason for the delegate to be Optional is that it might not ever be set at all. Diving deeper into what this UITableViewDelegate is exactly, you'll learn that it's actually a protocol and not a class or struct. A protocol provides a set of properties and/or methods that any object that conforms to (or adopts) this protocol must implement. Sometimes a protocol will provide optional methods, as UITableViewDelegate does. If this is the case, we can choose which delegate methods we want to implement and which methods we want to omit. Other protocols have mandatory methods. The UITableViewDataSource has a couple of mandatory methods to ensure that a data source is able to provide UITableView with the minimum amount of information needed in order to render the cells you want to display. If you've never heard of delegation and protocols before, you might feel like this is all a bit foreign and complex. That's okay; throughout this book, you'll gain a deeper understanding of protocols and how they work. In particular, the next section, where we'll cover Swift and protocol-oriented programming, should provide you with a very thorough overview of what protocols are and how they work. For now, it's important to be aware that a UITableView always asks another object for data through the UITableViewDataSource protocol, and their interactions are handled through the UITableViewDelegate. If you were to look at what UITableView does when it's rendering contents, the process could be dissected like this:
1. UITableView needs to reload the data.
2. UITableView checks whether it has a delegate; it asks the dataSource for the number of sections in this table.
3. Once the delegate responds with the number of sections, the table view will figure out how many cells are required for each section. This is done by asking the dataSource for the number of cells in each section.
4. Now that the table view knows the amount of content it needs to render, it will ask its data source for the cells that it should display.
5. Once the data source provides the required cells based on the number of contacts, the UITableView will display the cells one by one.
This process is a good example of how UITableView uses other objects to provide data on its behalf. Now that you know how delegation works for UITableView, it's about time you start implementing this in your own app. 
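If the pattern still feels abstract, the following is a minimal sketch of delegation using made-up types; TaskWorker and TaskWorkerDelegate are our own illustrative names, not part of UIKit. The worker only knows that its delegate conforms to a protocol, which mirrors how UITableView talks to its data source and delegate without knowing their concrete types:

// Made-up types that illustrate the delegation pattern.
protocol TaskWorkerDelegate: class {
    func taskWorker(_ worker: TaskWorker, didFinishWith result: String)
}

class TaskWorker {
    // weak and Optional: the delegate might never be set, and a strong
    // reference could create a retain cycle with the worker's owner.
    weak var delegate: TaskWorkerDelegate?

    func start() {
        // ... perform some work, then report back through the protocol ...
        delegate?.taskWorker(self, didFinishWith: "done")
    }
}

class WorkerOwner: TaskWorkerDelegate {
    let worker = TaskWorker()

    init() {
        // Wiring up delegation is a single assignment.
        worker.delegate = self
    }

    func taskWorker(_ worker: TaskWorker, didFinishWith result: String) {
        print("The worker reported: \(result)")
    }
}

Note that TaskWorker never references WorkerOwner directly; it communicates purely through the protocol, which is exactly the decoupling described above.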
Conforming to the UITableViewDataSource and UITableViewDelegate protocols In order to specify the UITableView's delegate and data source, the first thing you need to do is create an @IBOutlet for your UITableView and connect it to ViewController.swift. Add the following line to your ViewController, above the viewDidLoad method:

@IBOutlet var tableView: UITableView!

Now, using the same technique as before when designing UITableViewCell, select the UITableView in your Main.storyboard file and use the Connections Inspector to drag a new outlet reference to the UITableView. Make sure you select the tableView property, and that's it. You've now hooked up your UITableView to the ViewController code. To make the ViewController code both the data source and the delegate for UITableView, it will have to conform to the UITableViewDataSource and UITableViewDelegate protocols. To do this, you have to add the protocols you want to conform to in your class definition. The protocols are added, separated by commas, after the superclass. When you add the protocols to the ViewController, it should look like this:

class ViewController: UIViewController, UITableViewDataSource, UITableViewDelegate {
    // class body
}

Once you have done this, you will have an error in your code. That's because even though your class definition claims to implement these protocols, you haven't actually implemented the required functionality yet. If you look at the errors Xcode is giving you, it becomes clear that there are two methods you must implement. These methods are tableView(_:numberOfRowsInSection:) and tableView(_:cellForRowAt:). So let's fix the errors by adjusting our code a little bit in order to conform to the protocols. This is also a great time to refactor the contacts fetching a little bit. You'll want to access the contacts in multiple places, so the list should become an instance variable. Also, if you're going to create cells anyway, you might as well configure them to display the correct information right away. To do so, use the following code:

var contacts = [CNContact]()
// … viewDidLoad
// … retrieveContacts

func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return contacts.count
}

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // The identifier must match the reuse identifier set in the storyboard.
    let cell = tableView.dequeueReusableCell(withIdentifier: "ContactTableViewCell") as! ContactTableViewCell

    let contact = contacts[indexPath.row]
    cell.nameLabel.text = "\(contact.givenName) \(contact.familyName)"

    if let imageData = contact.imageData, contact.imageDataAvailable {
        cell.contactImage.image = UIImage(data: imageData)
    }

    return cell
}

The preceding code is what's needed to conform to the UITableViewDataSource protocol. Right below the @IBOutlet of your UITableView, a variable is declared that will hold the list of contacts. The following code snippet was also added to the ViewController:

func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return contacts.count
}

This method is called by the UITableView to determine how many cells it will have to render. This method just returns the total number of contacts that's in the contacts list. You'll notice that there's a section parameter passed to this method. That's because a UITableView can contain multiple sections. The contacts list only has a single section; if you have data that contains multiple sections, you should also implement the numberOfSections(in:) method. The second method we added was:

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCell(withIdentifier: "ContactTableViewCell") as! ContactTableViewCell

    let contact = contacts[indexPath.row]
    cell.nameLabel.text = "\(contact.givenName) \(contact.familyName)"

    if let imageData = contact.imageData, contact.imageDataAvailable {
        cell.contactImage.image = UIImage(data: imageData)
    }

    return cell
}

This method is used to get an appropriate cell for our UITableView to display. This is done by calling dequeueReusableCell(withIdentifier:) on the UITableView instance that's passed to this method. This is because UITableView can reuse cells that are currently off screen. This is a performance optimization that allows UITableView to display vast amounts of data without becoming slow or consuming big chunks of memory. The return type of dequeueReusableCell(withIdentifier:) is UITableViewCell, and our custom outlets are not available on this class. This is why we force cast the result from that method to ContactTableViewCell. Force casting to your own subclass will make sure that the rest of your code has access to your nameLabel and contactImage. 
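As a hedged aside, if you would rather avoid the force cast, the same method can be written with an optional cast and guard. This sketch is an alternative to the version above, not what the project requires; it stops with an explicit message if the storyboard wiring is ever wrong, instead of crashing opaquely:

// A safer variant of the dequeue, assuming the same ViewController and
// contacts array as above. If the cast fails (for example, because the
// reuse identifier points at the wrong cell class), we fail with a clear
// message rather than an opaque crash.
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    guard let cell = tableView.dequeueReusableCell(withIdentifier: "ContactTableViewCell") as? ContactTableViewCell else {
        fatalError("Expected to dequeue a ContactTableViewCell")
    }

    let contact = contacts[indexPath.row]
    cell.nameLabel.text = "\(contact.givenName) \(contact.familyName)"

    if let imageData = contact.imageData, contact.imageDataAvailable {
        cell.contactImage.image = UIImage(data: imageData)
    }

    return cell
}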
Casting objects will convert an object from one class or struct to another. This usually only works correctly when you're casting from a superclass to a subclass, as we're doing in our example. Casting can fail, so force casting is dangerous and should only be done if you want your app to crash, or if you consider a failed cast a programming error. We also grab a contact from the contacts array that corresponds to the current row of indexPath. This contact is then used to assign all the correct values to the cell, and then the cell is returned. This is all the setup needed to make your UITableView display the cells. Yet, if we build and run our app, it doesn't work! A few more changes will have to be made for it to do so. Currently, the retrieveContacts method does fetch the contacts for your user, but it doesn't update the contacts variable in ViewController. Also, the UITableView won't know that it needs to reload its data unless it's told to. Currently, the last few lines of retrieveContacts will look like this:

let contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch)
print(contacts)

Let's update these lines to the following code:

contacts = try! store.unifiedContacts(matching: predicate, keysToFetch: keysToFetch)
tableView.reloadData()

Now, the result of fetching contacts is assigned to the instance variable that's declared at the top of your ViewController. After doing that, we tell the tableView to reload its data, so it will go through the delegate methods that provide the cell count and cells again. Lastly, the UITableView doesn't know that the ViewController instance will act as both the dataSource and the delegate. So, you should update the viewDidLoad method to assign the UITableView's delegate and dataSource properties. Add the following lines to the end of the viewDidLoad method:

tableView.dataSource = self
tableView.delegate = self

If you build and run it now, your app works! If you're running it in the simulator or you haven't assigned images to your contacts, you won't see any images. If you'd like to assign some images to the contacts in the simulator, you can drag your images into the simulator to add them to the simulator's photo library. From there, you can add pictures to contacts just as you would on a real device. However, if you have assigned images to some of your contacts, you will see their images appear. You can now scroll through all of your contacts, but there seems to be an issue. When you're scrolling down your contacts list, you might suddenly see somebody else's photo next to the name of a contact that has no picture! This is actually a side effect of the performance optimization mentioned earlier: cells are reused as you scroll, and a recycled cell can still hold the previous contact's image unless you reset it when configuring the cell. Summary Your contacts app is complete for now. We've already covered a lot of ground on the way to iOS mastery. We started by creating a UIViewController that contains a UITableView. We used Auto Layout to pin the edges of the UITableView to the edges of the main view of ViewController. We also explored the Contacts.framework and understood how to set up our app so it can access the user's contact data. Resources for Article: Further resources on this subject: Offloading work from the UI Thread on Android [article] Why we need Design Patterns? [article] Planning and Structuring Your Test-Driven iOS App [article]

Mobile Forensics and Its Challenges

Packt
25 Apr 2016
10 min read
In this article by Heather Mahalik and Rohit Tamma, authors of the book Practical Mobile Forensics, Second Edition, we will cover the following topics:
Introduction to mobile forensics
Challenges in mobile forensics
(For more resources related to this topic, see here.) Why do we need mobile forensics? In 2015, there were more than 7 billion mobile cellular subscriptions worldwide, up from less than 1 billion in 2000, says the International Telecommunication Union (ITU). The world is witnessing technology and user migration from desktops to mobile phones. The following figure, sourced from statista.com, shows the actual and estimated growth of smartphones from the year 2009 to 2018. Growth of smartphones from 2009 to 2018 in million units Gartner Inc. reports that global mobile data traffic reached 52 million terabytes (TB) in 2015, an increase of 59 percent from 2014, and the rapid growth is set to continue through 2018, when mobile data levels are estimated to reach 173 million TB. Smartphones of today, such as the Apple iPhone, Samsung Galaxy series, and BlackBerry phones, are compact forms of computers with high performance, huge storage, and enhanced functionalities. Mobile phones are the most personal electronic devices that a user accesses. They are used to perform simple communication tasks, such as calling and texting, while still providing support for Internet browsing, e-mail, taking photos and videos, creating and storing documents, identifying locations with GPS services, and managing business tasks. As new features and applications are incorporated into mobile phones, the amount of information stored on the devices is continuously growing. Mobile phones have become portable data carriers, and they keep track of all your moves. With the increasing prevalence of mobile phones in people's daily lives and in crime, data acquired from phones becomes an invaluable source of evidence for investigations relating to criminal, civil, and even high-profile cases. It is rare to conduct a digital forensic investigation that does not include a phone. Mobile device call logs and GPS data were used to help solve the attempted bombing in Times Square, New York, in 2010. The details of the case can be found at http://www.forensicon.com/forensics-blotter/cell-phone-email-forensics-investigation-cracks-nyc-times-square-car-bombing-case/. The science behind recovering digital evidence from mobile phones is called mobile forensics. Digital evidence is defined as information and data that is stored on, received, or transmitted by an electronic device and that is used for investigations. Digital evidence encompasses any and all digital data that can be used as evidence in a case. Mobile forensics Digital forensics is a branch of forensic science focusing on the recovery and investigation of raw data residing in electronic or digital devices. The goal of the process is to extract and recover any information from a digital device without altering the data present on the device. Over the years, digital forensics grew along with the rapid growth of computers and various other digital devices. There are various branches of digital forensics based on the type of digital device involved, such as computer forensics, network forensics, mobile forensics, and so on. Mobile forensics is a branch of digital forensics related to the recovery of digital evidence from mobile devices. 
Forensically sound is a term used extensively in the digital forensics community to qualify and justify the use of a particular forensic technology or methodology. The main principle for a sound forensic examination of digital evidence is that the original evidence must not be modified. This is extremely difficult with mobile devices. Some forensic tools require a communication vector with the mobile device, so a standard write protection will not work during forensic acquisition. Other forensic acquisition methods may involve removing a chip or installing a bootloader on the mobile device prior to extracting data for forensic examination. In cases where the examination or data acquisition is not possible without changing the configuration of the device, the procedure and the changes must be tested, validated, and documented. Following proper methodology and guidelines is crucial in examining mobile devices, as it yields the most valuable data. As with any evidence gathering, not following the proper procedure during the examination can result in loss or damage of evidence or render it inadmissible in court. The mobile forensics process is broken into three main categories: seizure, acquisition, and examination/analysis. Forensic examiners face some challenges while seizing the mobile device as a source of evidence. At the crime scene, if the mobile device is found switched off, the examiner should place the device in a Faraday bag to prevent changes should the device automatically power on. As shown in the following figure, Faraday bags are specifically designed to isolate the phone from the network. A Faraday bag (Image courtesy: http://www.amazon.com/Black-Hole-Faraday-Bag-Isolation/dp/B0091WILY0) If the phone is found switched on, switching it off has a lot of concerns attached to it. If the phone is locked by a PIN or password, or is encrypted, the examiner will be required to bypass the lock or determine the PIN to access the device. Mobile phones are networked devices and can send and receive data through different sources, such as telecommunication systems, Wi-Fi access points, and Bluetooth. So, if the phone is in a running state, a criminal can securely erase the data stored on the phone by executing a remote wipe command. When a phone is switched on, it should be placed in a Faraday bag. If possible, prior to placing the mobile device in the Faraday bag, disconnect it from the network to protect the evidence by enabling flight mode and disabling all network connections (Wi-Fi, GPS, hotspots, and so on). This will also preserve the battery, which would otherwise drain while the device searches for a signal inside the Faraday bag, and it protects against any leaks in the bag. Once the mobile device is seized properly, the examiner may need several forensic tools to acquire and analyze the data stored on the phone. Mobile phones are dynamic systems that present a lot of challenges to the examiner in extracting and analyzing digital evidence. The rapid increase in the number of different kinds of mobile phones from different manufacturers makes it difficult to develop a single process or tool to examine all types of devices. Mobile phones are continuously evolving as existing technologies progress and new technologies are introduced. Furthermore, each mobile is designed with a variety of embedded operating systems. Hence, special knowledge and skills are required from forensic experts to acquire and analyze the devices. 
Challenges in mobile forensics One of the biggest forensic challenges when it comes to the mobile platform is the fact that data can be accessed, stored, and synchronized across multiple devices. As the data is volatile and can be quickly transformed or deleted remotely, more effort is required for the preservation of this data. Mobile forensics is different from computer forensics and presents unique challenges to forensic examiners. Law enforcement and forensic examiners often struggle to obtain digital evidence from mobile devices. The following are some of the reasons:
Hardware differences: The market is flooded with different models of mobile phones from different manufacturers. Forensic examiners may come across different types of mobile models, which differ in size, hardware, features, and operating system. Also, with a short product development cycle, new models emerge very frequently. As the mobile landscape changes with each passing day, it is critical for the examiner to adapt to all the challenges and remain updated on mobile device forensic techniques across various devices.
Mobile operating systems: Unlike personal computers, where Windows has dominated the market for years, mobile devices widely use more operating systems, including Apple's iOS, Google's Android, RIM's BlackBerry OS, Microsoft's Windows Mobile, HP's webOS, Nokia's Symbian OS, and many others. Even within these operating systems, there are several versions, which makes the task of the forensic investigator even more difficult.
Mobile platform security features: Modern mobile platforms contain built-in security features to protect user data and privacy. These features act as a hurdle during forensic acquisition and examination. For example, modern mobile devices come with default encryption mechanisms from the hardware layer to the software layer. The examiner might need to break through these encryption mechanisms to extract data from the devices.
Lack of resources: As mentioned earlier, with the growing number of mobile phones, the tools required by a forensic examiner also increase. Forensic acquisition accessories, such as USB cables, batteries, and chargers for different mobile phones, have to be maintained in order to acquire those devices.
Preventing data modification: One of the fundamental rules in forensics is to make sure that data on the device is not modified. In other words, any attempt to extract data from the device should not alter the data present on that device. But this is practically not possible with mobiles, because just switching on a device can change the data on it. Even if a device appears to be in an off state, background processes may still run. For example, in most mobiles, the alarm clock still works even when the phone is switched off. A sudden transition from one state to another may result in the loss or modification of data.
Anti-forensic techniques: Anti-forensic techniques, such as data hiding, data obfuscation, data forgery, and secure wiping, make investigations on digital media more difficult.
Dynamic nature of evidence: Digital evidence may be easily altered either intentionally or unintentionally. For example, browsing an application on the phone might alter the data stored by that application on the device.
Accidental reset: Mobile phones provide features to reset everything. Resetting the device accidentally while examining it may result in the loss of data. 
Device alteration: The possible ways to alter devices may range from moving application data and renaming files to modifying the manufacturer's operating system. In this case, the expertise of the suspect should be taken into account.
Passcode recovery: If the device is protected with a passcode, the forensic examiner needs to gain access to the device without damaging the data on the device. While there are techniques to bypass the screen lock, they may not always work on all versions.
Communication shielding: Mobile devices communicate over cellular networks, Wi-Fi networks, Bluetooth, and infrared. As device communication might alter the device data, the possibility of further communication should be eliminated after seizing the device.
Lack of availability of tools: There is a wide range of mobile devices. A single tool may not support all the devices or perform all the necessary functions, so a combination of tools needs to be used. Choosing the right tool for a particular phone might be difficult.
Malicious programs: The device might contain malicious software or malware, such as a virus or a Trojan. Such malicious programs may attempt to spread to other devices over either a wired interface or a wireless one.
Legal issues: Mobile devices might be involved in crimes that cross geographical boundaries. In order to tackle these multijurisdictional issues, the forensic examiner should be aware of the nature of the crime and the regional laws.
Summary Mobile devices store a wide range of information, such as SMS, call logs, browser history, chat messages, location details, and so on. Mobile device forensics includes many approaches and concepts that fall outside of the boundaries of traditional digital forensics. Extreme care should be taken while handling the device, right from the evidence intake phase to the archiving phase. Examiners responsible for mobile devices must understand the different acquisition methods and the complexities of handling the data during analysis. Extracting data from a mobile device is half the battle. The operating system, security features, and type of smartphone will determine the amount of access you have to the data. It is important to follow sound forensic practices and make sure that the evidence is unaltered during the investigation. Resources for Article: Further resources on this subject: Forensics Recovery [article] Mobile Phone Forensics – A First Step into Android Forensics [article] Mobility [article]


Python founder resigns - Guido van Rossum goes ‘on a permanent vacation from being BDFL’

Savia Lobo
13 Jul 2018
5 min read
Python is one of the most popular scripting languages, widely adopted and loved for its simplicity. Since its humble beginnings in the late 80s as the interpreter for a new, simple-to-read scripting language, it has come to dominate much of the tech world. Python has become a vital part of web development stacks, just as Perl and PHP have been, and it is core to domains like security. It is also used in popular current technologies such as AI, ML, and DL. After 28 years of successfully stewarding the Python community since inventing it back in Dec 1989, Guido van Rossum has decided to take himself out of the decision-making process of the community as its Benevolent Dictator For Life (BDFL). Guido still promises to be a part of the core development group. He also added that he will be available to mentor people, but most of the time the community will have to manage on its own. Benevolent Dictator For Life (BDFL) is a term that Guido's fellow Python enthusiasts came up with for him, as a joke, when discussing meeting minutes over email regarding leading Python's development and adoption. Who will look after the Python community now? Guido van Rossum said, "I am not going to appoint a successor". True to his leadership style, he has thrown his team of core developers into the deep end by asking them to consider what the Python community's new governance model could be. In his memo, he asked, "So what are you all going to do? Create a democracy? Anarchy? A dictatorship? A federation?" Guido's parting advice to the core dev team Guido expressed confidence in his team to continue to manage the day-to-day tasks and operations just as they've been doing under his leadership. The two things he wants the core developers and the community to think deeply about are:
How the PEPs are decided
How the new core developers will be inducted
He also emphasized the importance of militantly fostering the right community culture through Python's Community Code of Conduct (CoC). He said, "if you don't like that document your only option might be to leave this group voluntarily. Perhaps there are issues to decide like when should someone be kicked out (this could be banning people from python-dev or python-ideas too since those are also covered by the CoC)." He assured the team that while he has stepped down as the BDFL and from all decision-making duties, he will continue to be an active member of the community and will now be more available as a mentor to those on the core development team. Guido's decision to quit seems to have stemmed partly from the physical, mental, and emotional toll that the role has taken on him over the years. He concluded his thread on Transfer of Power by saying, "I'm tired, and need a very long break". How is the Python community taking this decision? The development team hopes Guido will make a comeback after his well-deserved break. As BDFL, Guido has provided them with consistency in design and taste. By having Guido as a monitor, the team has had a very consistent view of how the community should behave, and this has been an asset for the whole team. Now they have four ways to explore to govern the Python community:
1. Find a new BDFL. This option seems highly unlikely, as Guido's legacy is irreplaceable. Besides, it is practically the least robust option to rely on one person to take all key decisions and to commit their full time to the community. That person also needs to be well respected and accepted as a de facto head. 
2. Set up an N-virate leadership team (a group of 3 (triumvirate) or 5 (quintumvirate) experts). With such a model, the responsibilities and load will be equally distributed among the chosen members from the core development team. This appears to be the current favorite on the thread that opened yesterday.
3. Become a democracy. In this model, the community gets to vote on all key decisions. This seems like the short-term fix the team is gravitating towards, at least to decide on the immediate task at hand. But many on the team acknowledge that this is not a permanent answer, as it will pull the language in too many directions and is also time-consuming.
4. Explore the governance models of other open source communities. This option is also being seriously considered in the discussions.
Clearly, the community loves Guido, evident from the deluge of well wishes he's receiving from all over the globe. You know you've done your job well when you hear someone say 'You changed my life'. Guido has changed millions of lives for the better. https://twitter.com/AndrewYNg/status/1017664116482162689 https://twitter.com/anthonypjshaw/status/1017610576640393216 https://twitter.com/generativist/status/1017547690228396032 https://twitter.com/bloodyquantum/status/1017558674024218624 Thank you, Guido, for Python, your heart, and your leadership. We know the community will thrive even in your absence because you've cultivated an excellent culture and a great set of minds. Top 7 Python programming books you need to read Python web development: Django vs Flask in 2018 Python experts talk Python on Twitter: Q&A Recap


Building Your First Odoo Application

Packt
02 Jan 2017
22 min read
In this article by Daniel Reis, the author of the book Odoo 10 Development Essentials, we will create our first Odoo application and learn the steps needed to make it available to Odoo and install it. (For more resources related to this topic, see here.) Inspired by the notable http://todomvc.com/ project, we will build a simple To-Do application. It should allow us to add new tasks, mark them as completed, and finally clear the task list of all the already completed tasks. Understanding applications and modules It's common to hear about Odoo modules and applications. But what exactly is the difference between them? Add-on modules are the building blocks for Odoo applications. A module can add new features to Odoo, or modify existing ones. It is a directory containing a manifest, or descriptor file, named __manifest__.py, plus the remaining files that implement its features. Applications are the way major features are added to Odoo. They provide the core elements for a functional area, such as Accounting or HR, based on which additional add-on modules modify or extend features. Because of this, they are highlighted in the Odoo Apps menu. If your module is complex, and adds new or major functionality to Odoo, you might consider creating it as an application. If your module just makes changes to existing functionality in Odoo, it is likely not an application. Whether a module is an application or not is defined in the manifest. Technically, it does not have any particular effect on how the add-on module behaves; it is only used for highlighting in the Apps list. Creating the module basic skeleton We should have the Odoo server at ~/odoo-dev/odoo/. To keep things tidy, we will create a new directory alongside it to host our custom modules, at ~/odoo-dev/custom-addons. Odoo includes a scaffold command to automatically create a new module directory, with a basic structure already in place. You can learn more about it with:

$ ~/odoo-dev/odoo/odoo-bin scaffold --help

You might want to keep this in mind when you start working on your next module, but we won't be using it right now, since we will prefer to manually create all the structure for our module. An Odoo add-on module is a directory containing a __manifest__.py descriptor file. In previous versions, this descriptor file was named __openerp__.py. This name is still supported, but is deprecated. It also needs to be Python-importable, so it must also have an __init__.py file. The module's directory name is its technical name. We will use todo_app for it. The technical name must be a valid Python identifier: it should begin with a letter and can only contain letters, numbers, and the underscore character. The following commands create the module directory and an empty __init__.py file in it, ~/odoo-dev/custom-addons/todo_app/__init__.py. In case you would like to do that directly from the command line, this is what you would use:

$ mkdir ~/odoo-dev/custom-addons/todo_app
$ touch ~/odoo-dev/custom-addons/todo_app/__init__.py

Next, we need to create the descriptor file. It should contain only a Python dictionary with about a dozen possible attributes; of these, only the name attribute is required. A longer description attribute and the author attribute also have some visibility and are advised. 
We should now add a __manifest__.py file alongside the __init__.py file with the following content:

{
    'name': 'To-Do Application',
    'description': 'Manage your personal To-Do tasks.',
    'author': 'Daniel Reis',
    'depends': ['base'],
    'application': True,
}

The depends attribute can have a list of other modules that are required. Odoo will have them automatically installed when this module is installed. It's not a mandatory attribute, but it's advised to always have it. If no particular dependencies are needed, we should depend on the core base module. You should be careful to ensure all dependencies are explicitly set here; otherwise, the module may fail to install in a clean database (due to missing dependencies) or have loading errors, if by chance the other required modules are loaded afterwards. For our application, we don't need any specific dependencies, so we depend on the base module only. To be concise, we chose to use very few descriptor keys, but in a real-world scenario, we recommend that you also use the additional keys, since they are relevant for the Odoo apps store:
summary: This is displayed as a subtitle for the module.
version: By default, this is 1.0. It should follow semantic versioning rules (see http://semver.org/ for details).
license: By default, this is LGPL-3.
website: This is a URL to find more information about the module. This can help people find more documentation or the issue tracker to file bugs and suggestions.
category: This is the functional category of the module, which defaults to Uncategorized. The list of existing categories can be found in the security groups form (Settings | User | Groups), in the Application field drop-down list.
These other descriptor keys are also available:
installable: It is by default True but can be set to False to disable a module.
auto_install: If the auto_install attribute is set to True, this module will be automatically installed, provided all its dependencies are already installed. It is used for glue modules.
Since Odoo 8.0, instead of the description key, we can use a README.rst or README.md file in the module's top directory. A word about licenses Choosing a license for your work is very important, and you should consider carefully what is the best choice for you, and its implications. The most used licenses for Odoo modules are the GNU Lesser General Public License (LGPL) and the Affero General Public License (AGPL). The LGPL is more permissive and allows commercial derivative works, without the need to share the corresponding source code. The AGPL is a stronger open source license, and requires derivative works and service hosting to share their source code. Learn more about the GNU licenses at https://www.gnu.org/licenses/. Adding to the add-ons path Now that we have a minimalistic new module, we want to make it available to the Odoo instance. For that, we need to make sure the directory containing the module is in the add-ons path, and then update the Odoo module list. We will change into our work directory and start the server with the appropriate add-ons path configuration:

$ cd ~/odoo-dev
$ ./odoo/odoo-bin -d todo --addons-path="custom-addons,odoo/addons" --save

The --save option saves the options you used in a config file. This spares us from repeating them every time we restart the server: just run ./odoo-bin and the last saved options will be used. Look closely at the server log. It should have an INFO ? odoo: addons paths: [...] line. It should include our custom-addons directory. 
Remember to also include any other add-ons directories you might be using. For instance, if you also have a ~/odoo-dev/extra directory containing additional modules to be used, you might want to include them as well, using the option: --addons-path="custom-addons,extra,odoo/addons" Now we need the Odoo instance to acknowledge the new module we just added. Installing the new module In the Apps top menu, select the Update Apps List option. This will update the module list, adding any modules that may have been added since the last update to the list. Remember that we need the developer mode enabled for this option to be visible. That is done in the Settings dashboard, in the link at the bottom right, below the Odoo version number information. Make sure your web client session is working with the right database. You can check that at the top right: the database name is shown in parentheses, right after the user name. A way to enforce using the correct database is to start the server instance with the additional option --db-filter=^MYDB$. The Apps option shows us the list of available modules. By default, it shows only application modules. Since we created an application module, we don't need to remove that filter to see it. Type todo in the search and you should see our new module, ready to be installed. Now click on the module's Install button and we're ready! The Model layer Now that Odoo knows about our new module, let's start by adding a simple model to it. Models describe business objects, such as an opportunity, sales order, or partner (customer, supplier, and so on). A model has a list of attributes and can also define its specific business logic. Models are implemented using a Python class derived from an Odoo template class. They translate directly to database objects, and Odoo automatically takes care of this when installing or upgrading the module. The mechanism responsible for this is the Object-Relational Mapping (ORM). Our module will be a very simple application to keep to-do tasks. These tasks will have a single text field for the description and a checkbox to mark them as complete. We should later add a button to clean the to-do list of the old completed tasks. Creating the data model The Odoo development guidelines state that the Python files for models should be placed inside a models subdirectory. For simplicity, we won't be following this here, so let's create a todo_model.py file in the main directory of the todo_app module. Add the following content to it:

# -*- coding: utf-8 -*-
from odoo import models, fields

class TodoTask(models.Model):
    _name = 'todo.task'
    _description = 'To-do Task'

    name = fields.Char('Description', required=True)
    is_done = fields.Boolean('Done?')
    active = fields.Boolean('Active?', default=True)

The first line is a special marker telling the Python interpreter that this file uses UTF-8, so that it can expect and handle non-ASCII characters. We won't be using any, but it's a good practice to have it anyway. The second line is a Python import statement, making available the models and fields objects from the Odoo core. The third line declares our new model. It's a class derived from models.Model. The next line sets the _name attribute, defining the identifier that will be used throughout Odoo to refer to this model. Note that the actual Python class name, TodoTask in this case, is meaningless to other Odoo modules. The _name value is what will be used as an identifier. Notice that this and the following lines are indented. 
If you're not familiar with Python, you should know that this is important: indentation defines a nested code block, so these four lines should all be equally indented. Then we have the _description model attribute. It is not mandatory, but it provides a user-friendly name for the model records, which can be used for better user messages. The last three lines define the model's fields. It's worth noting that name and active are special field names. By default, Odoo will use the name field as the record's title when referencing it from other models. The active field is used to inactivate records, and by default, only active records will be shown. We will use it to clear away completed tasks without actually deleting them from the database. Right now, this file is not yet used by the module. We must tell Python to load it with the module in the __init__.py file. Let's edit it to add the following line:

from . import todo_model

That's it. For our Python code changes to take effect, the server instance needs to be restarted (unless it was using the --dev mode). We won't see any menu option to access this new model, since we didn't add one yet. Still, we can inspect the newly created model using the Technical menu. In the Settings top menu, go to Technical | Database Structure | Models, search for the todo.task model on the list, and then click on it to see its definition: If everything goes right, it is confirmed that the model and fields were created. If you can't see them here, try a server restart with a module upgrade, as described before. We can also see some additional fields we didn't declare. These are reserved fields Odoo automatically adds to every new model. They are as follows:
id: A unique, numeric identifier for each record in the model.
create_date and create_uid: These specify when the record was created and who created it, respectively.
write_date and write_uid: These confirm when the record was last modified and who modified it, respectively.
__last_update: This is a helper that is not actually stored in the database. It is used for concurrency checks.
The View layer The View layer describes the user interface. Views are defined using XML, which is used by the web client framework to generate data-aware HTML views. We have menu items that can activate the actions that render views. For example, the Users menu item processes an action also called Users, which in turn renders a series of views. There are several view types available, such as the list and form views, and the filter options made available are also defined by a particular type of view, the search view. The Odoo development guidelines state that the XML files defining the user interface should be placed inside a views/ subdirectory. Let's start creating the user interface for our To-Do application. Adding menu items Now that we have a model to store our data, we should make it available on the user interface. For that, we should add a menu option to open the To-do Task model so that it can be used. Create the views/todo_menu.xml file to define a menu item and the action performed by it:

<?xml version="1.0"?>
<odoo>
    <!-- Action to open To-do Task list -->
    <act_window id="action_todo_task"
        name="To-do Task"
        res_model="todo.task"
        view_mode="tree,form" />
    <!-- Menu item to open To-do Task list -->
    <menuitem id="menu_todo_task"
        name="Todos"
        action="action_todo_task" />
</odoo>

The user interface, including menu options and actions, is stored in database tables. 
The XML file is a data file used to load those definitions into the database when the module is installed or upgraded. The preceding code is an Odoo data file, describing two records to add to Odoo:
The <act_window> element defines a client-side window action that will open the todo.task model with the tree and form views enabled, in that order.
The <menuitem> defines a top menu item calling the action_todo_task action, which was defined before.
Both elements include an id attribute. This id, also called an XML ID, is very important: it is used to uniquely identify each data element inside the module, and can be used by other elements to reference it. In this case, the <menuitem> element needs to reference the action to process, and needs to make use of the <act_window> id for that. Our module does not yet know about the new XML data file. This is done by adding it to the data attribute in the __manifest__.py file. It holds the list of files to be loaded by the module. Add this attribute to the descriptor's dictionary:

'data': ['views/todo_menu.xml'],

Now we need to upgrade the module again for these changes to take effect. Go to the Todos top menu and you should see our new menu option available: Even though we haven't defined our user interface view, clicking on the Todos menu will open an automatically generated form for our model, allowing us to add and edit records. Odoo is nice enough to automatically generate them so that we can start working with our model right away. Odoo supports several types of views, but the three most important ones are: tree (usually called list views), form, and search views. We'll add an example of each to our module. Creating the form view All views are stored in the database, in the ir.ui.view model. To add a view to a module, we declare a <record> element describing the view in an XML file, which is to be loaded into the database when the module is installed. Add this new views/todo_view.xml file to define our form view:

<?xml version="1.0"?>
<odoo>
    <record id="view_form_todo_task" model="ir.ui.view">
        <field name="name">To-do Task Form</field>
        <field name="model">todo.task</field>
        <field name="arch" type="xml">
            <form string="To-do Task">
                <group>
                    <field name="name"/>
                    <field name="is_done"/>
                    <field name="active" readonly="1"/>
                </group>
            </form>
        </field>
    </record>
</odoo>

Remember to add this new file to the data key in the manifest file; otherwise, our module won't know about it and it won't be loaded. This will add a record to the ir.ui.view model with the identifier view_form_todo_task. The view is for the todo.task model and is named To-do Task Form. The name is just for information; it does not have to be unique, but it should allow one to easily identify which record it refers to. In fact, the name can be entirely omitted; in that case, it will be automatically generated from the model name and the view type. The most important attribute is arch, which contains the view definition, highlighted in the XML code above. The <form> tag defines the view type, and in this case contains three fields. We also added an attribute to the active field to make it read-only. Adding action buttons Forms can have buttons to perform actions. These buttons are able to trigger workflow actions, run window actions, such as opening another form, or run Python functions defined in the model. They can be placed anywhere inside a form, but for document-style forms, the recommended place for them is the <header> section. 
For our application, we will add two buttons to run the methods of the todo.task model:

<header>
  <button name="do_toggle_done" type="object"
    string="Toggle Done" class="oe_highlight" />
  <button name="do_clear_done" type="object"
    string="Clear All Done" />
</header>

The basic attributes of a button comprise the following:

The string attribute, which has the text to be displayed on the button
The type attribute, referring to the type of action it performs
The name attribute, referring to the identifier for that action
The class attribute, an optional attribute to apply CSS styles, like in regular HTML

The complete form view

At this point, our todo.task form view should look like this:

<form>
  <header>
    <button name="do_toggle_done" type="object"
      string="Toggle Done" class="oe_highlight" />
    <button name="do_clear_done" type="object"
      string="Clear All Done" />
  </header>
  <sheet>
    <group name="group_top">
      <group name="group_left">
        <field name="name"/>
      </group>
      <group name="group_right">
        <field name="is_done"/>
        <field name="active" readonly="1" />
      </group>
    </group>
  </sheet>
</form>

Remember that for the changes to be loaded to our Odoo database, a module upgrade is needed. To see the changes in the web client, the form needs to be reloaded: either click again on the menu option that opens it or reload the browser page (F5 in most browsers). The action buttons won't work yet, since we still need to add their business logic.

The business logic layer

Now we will add some logic to our buttons. This is done with Python code, using the methods in the model's Python class.

Adding business logic

We should edit the todo_model.py Python file to add to the class the methods called by the buttons. First we need to import the new API, so add it to the import statement at the top of the Python file:

from odoo import models, fields, api

The action of the Toggle Done button will be very simple: just toggle the Is Done? flag. For logic on records, use the @api.multi decorator. Here, self will represent a recordset, and we should then loop through each record. Inside the TodoTask class, add this:

@api.multi
def do_toggle_done(self):
    for task in self:
        task.is_done = not task.is_done
    return True

The code loops through all the to-do task records and, for each one, modifies the is_done field, inverting its value. The method does not need to return anything, but we should have it return at least a True value. The reason is that clients can use XML-RPC to call these methods, and this protocol does not support server functions returning just a None value.

For the Clear All Done button, we want to go a little further. It should look for all active records that are done and make them inactive. Usually, form buttons are expected to act only on the selected record, but in this case, we will want it to also act on records other than the current one:

@api.model
def do_clear_done(self):
    dones = self.search([('is_done', '=', True)])
    dones.write({'active': False})
    return True

On methods decorated with @api.model, the self variable represents the model with no record in particular. We will build a dones recordset containing all the tasks that are marked as done. Then, we set the active flag to False on them.

The search method is an API method that returns the records that meet some conditions. These conditions are written in a domain, which is a list of triplets. The write method sets the values at once on all the elements of a recordset. The values to write are described using a dictionary.
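Since domains show up everywhere in Odoo development, here is a short, hedged illustration of how the triplets combine (the field names are the ones from this module; the particular combinations are invented for illustration). Consecutive triplets are joined by an implicit AND, and the '|' prefix operator expresses an OR:

# implicit AND between consecutive triplets
stale = self.search([('is_done', '=', True), ('active', '=', True)])
# explicit OR using the '|' prefix operator
either = self.search(['|', ('is_done', '=', True), ('active', '=', False)])
# write() takes a dictionary of values and applies it to every record at once
stale.write({'active': False})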
Using write here is more efficient than iterating through the recordset to assign the value to each record one by one.

Set up access security

You might have noticed that, upon loading, our module is getting a warning message in the server log: The model todo.task has no access rules, consider adding one.

The message is pretty clear: our new model has no access rules, so it can't be used by anyone other than the admin super user. As a super user, the admin ignores data access rules, and that's why we were able to use the form without errors. But we must fix this before other users can use our model. Another issue yet to address is that we want the to-do tasks to be private to each user. Odoo supports row-level access rules, which we will use to implement that.

Adding access control security

To get a picture of what information is needed to add access rules to a model, use the web client and go to Settings | Technical | Security | Access Controls List.

Here we can see the ACL for some models. It indicates, per security group, what actions are allowed on records. This information has to be provided by the module, using a data file to load the lines into the ir.model.access model. We will add full access to the Employee group on the model. Employee is the basic access group nearly everyone belongs to. This is done using a CSV file named security/ir.model.access.csv. Let's add it with the following content:

id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_todo_task_group_user,todo.task.user,model_todo_task,base.group_user,1,1,1,1

The filename corresponds to the model to load the data into, and the first line of the file has the column names. These are the columns provided by the CSV file:

id: This is the record's external identifier (also known as XML ID). It should be unique in our module.
name: This is a description title. It is only informative, and it's best if it's kept unique. Official modules usually use a dot-separated string with the model name and the group. Following this convention, we used todo.task.user.
model_id: This is the external identifier for the model we are giving access to. Models have XML IDs automatically generated by the ORM: for todo.task, the identifier is model_todo_task.
group_id: This identifies the security group to give permissions to. The most important ones are provided by the base module. The Employee group is such a case and has the identifier base.group_user.
The last four perm fields flag whether to grant read, write, create, or unlink (delete) access.

We must not forget to add the reference to this new file in the __manifest__.py descriptor's data attribute. It should look like this:

'data': [
  'security/ir.model.access.csv',
  'views/todo_view.xml',
  'views/todo_menu.xml',
],

As before, upgrade the module for these additions to take effect. The warning message should be gone, and we can confirm that the permissions are OK by logging in with the user demo (the password is also demo). If we run our tests now, they should fail only the test_record_rule test case.

Summary

We created a new module from the start, covering the most frequently used elements in a module: models, the three basic types of views (form, list, and search), business logic in model methods, and access security. Always remember: when adding model fields, an upgrade is needed; when changing Python code, including the manifest file, a restart is needed.
When changing XML or CSV files, an upgrade is needed; also, when in doubt, do both: restart the server and upgrade the modules.

Resources for Article:

Further resources on this subject:
Getting Started with Odoo Development [Article]
Introduction to Odoo [Article]
Web Server Development [Article]

Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools

Sugandha Lahoti
18 Apr 2018
8 min read
This tutorial is a step-by-step blueprint for a Vehicle Telemetry job analysis on Azure using Stream Analytics tools for Visual Studio. For connected car and real-time predictive Vehicle Telemetry Analysis, there's a necessity to identify opportunities for new solutions. These opportunities include:

How a car could be shipped globally with the required smart hardware to connect to the internet within the next few years
How the embedded connections could define Vehicle Telemetry predictive health status, so automotive companies will be able to collect data on the performance of cars
How to send interactive updates and patches to a car's instrumentation remotely
How to avoid car equipment damage through precautionary measures with prior notification

All these require an intelligent vehicle health telemetry analysis, which you can implement using Azure Streaming.

Stream Analytics tools for Visual Studio

The Stream Analytics tools for Visual Studio help prepare, build, and deploy real-time events on Azure. Optionally, the tools enable you to monitor the streaming job using local sample data as job input testing, as well as real-time monitoring, job metrics, diagram view, and so on. This tool provides a complete development setup for the implementation and deployment of real-world Azure Stream Analytics jobs using Visual Studio.

Developing a Stream Analytics job using Visual Studio

1. After the installation of the Stream Analytics tools, a new Stream Analytics job can be created in Visual Studio. You can get started in the Visual Studio IDE from File | New Project. Under the Templates, select Stream Analytics and choose Azure Stream Analytics Application.
2. Next, the job name, project, and solution location should be provided. Under the Solution menu, you may also select options such as Add to solution or Create new instance, apart from Create new solution, from the available drop-down menu during Visual Studio Stream Analytics job creation.
3. Once the ASA job is created, in the Solution Explorer, the job topology folder structure can be viewed as Inputs (job input), Outputs (job output), JobConfig.json, Script.asaql (the Stream Analytics query file), Azure Functions (optional), and so on.
4. Next, provide the job topology data input and output event source settings by selecting Input.json and Output.json from the Inputs and Outputs directories, respectively.
5. For a Vehicle Telemetry Predictive Analysis demo using an Azure Stream Analytics job, we need to take two different job data streams. One should be a Stream type for an unbounded sequence of real-time events processed through Azure Event Hub, along with the Hub policy name, policy key, event serialization format, and so on.

Defining a Stream Analytics query for Vehicle Telemetry job analysis using Stream Analytics tools

To assign the streaming analytics query definition, the Script.asaql file from the ASA project should be selected, specifying the data and reference stream input joining operation, along with supplying the analyzed output to Blob storage as configured in the job properties.

Query to define Vehicle Telemetry (Connected Car) engine health status and pollution index over cities
The solution architecture of the Connected Car-Vehicle Telemetry Analysis case study used in this demo, with Azure Stream Analytics for real-time predictive analysis, is as follows:

Testing Stream Analytics queries locally or in the cloud

1. Azure Stream Analytics tools in Visual Studio offer the flexibility to execute the queries either locally or directly in the cloud. In the Script.asaql file, you need to provide the respective query of your streaming job and test it against local input stream/reference data before processing in Azure.
2. To run the Stream Analytics job query locally, first select Add Local Input by right-clicking on the ASA project in the VS Solution Explorer, and choose Add Local Input.
3. Define the local input for each Event Hub data stream and Blob storage data, and execute the job query locally before publishing it in Azure.
4. After adding each local input test data, you can test the Stream Analytics job query locally in the VS editor by clicking on the Run Locally button in the top left corner of the VS IDE.

Typical connected car scenarios for such a query include:

Vehicle diagnostics
Usage-based insurance
Engine emission control
Engine performance remapping
Eco-driving
Roadside assistance calls
Fleet management

So, specify the following schema during the designing of a connected car streaming job query with Stream Analytics, using parameters such as vehicle index number (VIN), model, outside temperature, engine speed, fuel meter, tire pressure, and brake status, by defining an INNER join between the Event Hub data stream and the Blob storage reference stream containing the vehicle model information:

Select input.vin, BlobSource.Model, input.timestamp, input.outsideTemperature, input.engineTemperature, input.speed, input.fuel, input.engineoil, input.tirepressure, input.odometer, input.city, input.accelerator_pedal_position, input.parking_brake_status, input.headlamp_status, input.brake_pedal_status, input.transmission_gear_position, input.ignition_status, input.windshield_wiper_status, input.abs
into output
from input
join BlobSource on input.vin = BlobSource.VIN

The query could be further customized for complex event processing analysis by applying windowing concepts such as the Tumbling window function, which assigns equal-length, non-overlapping series of events in streams with a fixed time slice. The following Vehicle Telemetry analytics query will compute a smart car health index from complex streams over a specified two-second timestamp interval, in the form of a fixed-length series of events:

select BlobSource.Model, input.city, count(vin) as cars,
  avg(input.engineTemperature) as engineTemperature,
  avg(input.speed) as Speed,
  avg(input.fuel) as Fuel,
  avg(input.engineoil) as EngineOil,
  avg(input.tirepressure) as TirePressure,
  avg(input.odometer) as Odometer
into EventHubOut
from input
join BlobSource on input.vin = BlobSource.VIN
group by BlobSource.model, input.city, TumblingWindow(second,2)
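If the tumbling-window semantics feel abstract, this small Python sketch (not Azure code, just an illustration) shows the same idea: events are assigned to fixed, non-overlapping two-second buckets keyed by their timestamp:

from collections import defaultdict

def tumbling_window(events, size=2.0):
    """events: iterable of (timestamp_seconds, value); size: window length in seconds."""
    windows = defaultdict(list)
    for ts, value in events:
        windows[int(ts // size)].append(value)   # non-overlapping, fixed-length slices
    return {k * size: vals for k, vals in sorted(windows.items())}

# tumbling_window([(0.5, 'a'), (1.9, 'b'), (2.1, 'c')])
# -> {0.0: ['a', 'b'], 2.0: ['c']}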
5. The query could be executed locally or submitted to Azure. While running the job locally, a Command Prompt will appear, reporting the local Stream Analytics job's running status, along with the output data folder location.
6. If run locally, the job output folder will contain two files in the project disk location, within the ASALocalRun directory, named with the current date timestamp. The two output files will be in .csv and .json formats, respectively.

Now, if you submit the job to Azure from the Stream Analytics project in Visual Studio, it offers a rich job dashboard providing an interactive job diagram view, a job metrics graph, and errors (if any). The Vehicle Telemetry Predictive Health Analytics job dashboard in Visual Studio provides a job diagram with real-time insights into events, with the display refreshed at least every 30 minutes.

The Stream Analytics job metrics graph provides interactive insights on input and output events, out-of-order events, late events, runtime errors, and data conversion errors related to the job, as appropriate.

For Connected Car-Predictive Vehicle Telemetry Analytics, you may configure the data input streams to be processed with complex events by using a definite timestamp interval in a non-overlapping mode, such as a Tumbling window over a two-second time slice. The output sink should be configured as a Service Bus Event Hub with a data partitioning unit of 32 and a maximum message retention period of 7 days. The processed events in the Event Hub job output sink can also be archived in Azure Blob storage for long-term, infrequent access.

The Azure Service Bus Event Hub job output metrics dashboard view configured for vehicle telemetry analysis is as follows:

On the left side of the job dashboard, the Job Summary provides a comprehensive view of the job parameters, such as job status, creation time, job output start time, start mode, last output timestamp, output error handling mechanism (provided for quick reference in the logs), late event arrival tolerance windows, and so on. The job can be stopped, started, deleted, or even refreshed by selecting icons from the top left menu of the job view dashboard in VS.

Optionally, a complete clone of the Stream Analytics project can also be generated by clicking on the Generate Project icon from the top menu of the job dashboard.

This article is an excerpt from the book Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh. This book provides lessons on real-time data processing for quick insights using Azure Stream Analytics.

Say hello to Streaming Analytics
How to get started with Azure Stream Analytics and 7 reasons to choose it

3 best practices to develop effective test automation with Selenium

Amey Varangaonkar
30 Mar 2018
5 min read
In this article, we will look at some of the industry best practices and standards to use in order to develop and maintain effective test automation strategies with Selenium.

1. Naming Convention

When developing the framework, it is important to establish some naming convention standards for each type of file created. In general, this is completely subjective. But it is important to establish the conventions upfront so users can apply the same file naming conventions to the same file types, to avoid confusion later on when there are many users building them. Here are a few suggestions:

Utility classes: Utility classes don't use any prefix or suffix in their names, but do follow Java standards such as having the first letter of each word capitalized, and ending with .java extensions (acronyms used can be all caps). Examples include CreateDriver.java, Global_VARS.java, BrowserUtils.java, DataProvider_JSON.java, and so on.
Page object classes: It is useful to be able to differentiate the page object classes from the utility classes. A good way to name them is FeaturePO.java, where PO stands for page object and is capitalized, along with the first letter of each word. End the name with a .java extension.
Test classes: It is useful to be able to differentiate the test classes from the PO and utility classes. A good way to name them is FeatureTest.java, where Test stands for test class, and the first letter of each word is capitalized. End the name with a .java extension.
Data files: Data files are obviously named with an extension for the type of file, such as .json, .csv, .xls, and so on. But, in the case of this framework, the files can be named the same as the corresponding test class, but without the word Test. For example, LoginCredsTest.java would have the data file LoginCreds.json.
Setup classes: Usually, there is a common setup class for setup and teardown for all test classes, which can be named AUTSetup.java. So, as an example, GmailSetup.java would be the setup class for all test classes derived from it, and it contains only TestNG annotated methods.
Test methods: Most test methods in each test class are named using sequential numbering, followed by a feature and an action. For example: tc001_gmailLoginCreds, tc002_gmailLoginPassword, and so on.
Setup/teardown methods: The setup and teardown methods can be named according to the setup or teardown action they perform. The following naming conventions can be used in conjunction with the TestNG annotations:

@BeforeSuite: The suiteSetup method
@AfterSuite: The suiteTeardown method
@BeforeClass: The classSetup method
@AfterClass: The classTeardown method
@BeforeMethod: The methodSetup method
@AfterMethod: The methodTeardown method

2. Comments

Although obvious and somewhat subjective, it is good practice to comment on code when it is not obvious why something is done, there is a complex routine, or there is a "kluge" added to work around a problem. In Java, there are two types of comments used, as well as a set of standards for JavaDoc. We will look at a couple of examples here.

There is an Oracle article on using comments in Java located at http://www.oracle.com/technetwork/java/codeconventions-141999.html#385.

Block comment:

/* single line block comment */
code goes here...

/*
 * multi-line block
 * comment
 */
code goes here...
End-of-line comment:

code goes here // end of line comment

JavaDoc comments:

/**
 * Description of the method
 *
 * @param arg1 to the method
 * @param arg2 to the method
 * @return value returned from the method
 */

The Oracle documentation on using the JavaDoc tool is located at http://www.oracle.com/technetwork/java/javase/documentation/index-137868.html.

3. Folder names and structures

As the framework starts to evolve, there needs to be some organization around the folder structure in the IDE, along with a naming convention. The IntelliJ IDE uses modules to organize the repo, and under those modules, users can create the folder structures. It is common to also separate the page object and utility classes from the test classes. So, as an example, under the top-level folder src, create main/java/com/yourCo/pageobjects and test/java/com/yourCo/tests folders. From there, under each structure, users can create feature-based folders. Also, to retain a completely independent set of libraries for the Selenium driver and utility classes, create a separate module called something like Selenium3 with the same folder structures. This will allow users to use the same driver class and utilities for any additional modules that are added to the repo/framework. It is common to automate testing for more than one application, and this will allow the inclusion of the module in those additional modules. Here are a few suggestions regarding folder naming conventions:

Name all the folders using lowercase names, so there won't be a mix-and-match of different standards.
Name the page object class folders after the features they pertain to; for instance, login for LoginPO.java, email for GmailPO.java, and so on.
Name the test class folders after the same features as the PO classes, but under the test folder. Then there can be a one-to-one correlation between the PO and test class folders.
Store the common base classes under a common folder under main.
Store the common setup classes under a common folder under test.
Store all the utility classes for the AUT under a utils folder under main.
Store all the suite files for the tests under a suites folder under test.

Here is an example of a folder structure for the Selenium3 module. Of course, there are no test folders under this one:

Here is an example of a folder structure for an AUT module showing the PO and test class folders:

You read an excerpt from the book Selenium Framework Design in Data-Driven Testing, written by Carl Cocchiaro. This book presents effective techniques for building data-driven test frameworks using Selenium WebDriver.

Role of AngularJS

Packt
16 Dec 2014
7 min read
This article by Sandeep Kumar Patel, author of Responsive Web Design with AngularJS, explores the role of AngularJS in responsive web development. Before going into AngularJS, you will learn about responsive web development in general. Responsive web development can be performed in two ways:

Using the browser sniffing approach
Using the CSS3 media queries approach

(For more resources related to this topic, see here.)

Using the browser sniffing approach

When we view web pages through our browser, the browser sends a user agent string to the server. This string provides information such as browser and device details. Reading these details, the browser can be redirected to the appropriate view. This method of reading client details is known as browser sniffing. The browser string carries a lot of different information about the source from where the request is generated. The following diagram shows the information shared by the user agent string:

Details of the parameters present in the user agent string are as follows:

Browser name: This represents the actual name of the browser from where the request has originated, for example, Mozilla or Opera
Browser version: This represents the browser release version from the vendor; for example, Firefox has the latest version 31
Browser platform: This represents the underlying engine on which the browser is running, for example, Trident or WebKit
Device OS: This represents the operating system running on the device from where the request has originated, for example, Linux or Windows
Device processor: This represents the processor type on which the operating system is running, for example, 32 or 64 bit

A different browser string is generated based on the combination of the device and the type of browser used while accessing a web page. The following table shows examples of browser strings:

Browser: Firefox; Device: Windows desktop; User agent string: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0
Browser: Chrome; Device: Windows desktop; User agent string: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.66 Safari/537.36
Browser: Opera; Device: Windows desktop; User agent string: Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14
Browser: Safari; Device: OS X 10 desktop; User agent string: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.13+ (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2
Browser: Internet Explorer; Device: Windows desktop; User agent string: Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0

AngularJS has features like providers and services that can be very useful for this browser user-agent sniffing and redirection approach. An AngularJS provider can be created and used in the configuration of the routing module. This provider can have reusable properties and methods that can be used to identify the device and route the specific request to the appropriate template view. To discover more about user agent strings on various browser and device combinations, visit http://www.useragentstring.com/pages/Browserlist/.

CSS3 media queries approach

CSS3 brings a new horizon to web application development. One of its key features is media queries, used to develop responsive web applications. Media queries use media types and features as deciding parameters to apply styles to the current web page.

Media type

CSS3 media queries provide rules for media types to have different styles applied to a web page.
In the media queries specification, the media types that should be supported by the implementing browser are listed. These media types are as follows:

all: This is used for all media type devices
aural: This is used for speech and sound synthesizers
braille: This is used for braille tactile feedback devices
embossed: This is used for paged braille printers
handheld: This is used for small or handheld devices, for example, mobile
print: This is used for printers, for example, an A4 size paper document
projection: This is used for projection-based devices, such as a projector screen with a slide
screen: This is used for computer screens, for example, desktop and laptop screens
tty: This is used for media using a fixed-pitch character grid, such as teletypes and terminals
tv: This is used for television-type devices, for example, webOS or Android-based televisions

A media rule can be declared using the @media keyword with the specific type for the targeted media. The following code shows an example of the media rule usage, where the body background color is black and the text is white for the screen media type, and the body background color is white and the text is black for the print media type:

@media screen {
  body {
    background: black;
    color: white;
  }
}

@media print {
  body {
    background: white;
    color: black;
  }
}

An external style sheet can be downloaded and applied to the current page based on the media type, using the HTML link tag. The following code uses the link tag in conjunction with a media type:

<link rel='stylesheet' media='screen' href='<fileName.css>' />

To learn more about different media types, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_types.

Media feature

Conditional styles can be applied to a page based on different features of a device. The features that are supported by CSS3 media queries to apply styles are as follows:

color: Based on the number of bits used for a color component by the device, specific styles can be applied to a page.
color-index: Based on the color lookup table, styles can be applied to a page.
aspect-ratio: Based on the aspect ratio of the display area, style sheets can be applied to a page.
device-aspect-ratio: Based on the device aspect ratio, styles can be applied to a page.
device-height: Based on the device height, styles can be applied to a page. This includes the entire screen.
device-width: Based on the device width, styles can be applied to a page. This includes the entire screen.
grid: Based on the device type, bitmap or grid, styles can be applied to a page.
height: Based on the device rendering area height, styles can be applied to a page.
monochrome: Based on the monochrome type, styles can be applied. This represents the number of bits used by the device in the grey scale.
orientation: Based on the viewport mode, landscape or portrait, styles can be applied to a page.
resolution: Based on the pixel density, styles can be applied to a page.
scan: Based on the scanning type used by the device for rendering, styles can be applied to a page.
width: Based on the device screen width, specific styles can be applied.
The following code shows some examples of CSS3 media queries using different device features for conditional styling:

/* for screen devices with a minimum aspect ratio of 0.5 */
@media screen and (min-aspect-ratio: 1/2) {
  img {
    height: 70px;
    width: 70px;
  }
}

/* for all devices in portrait viewport */
@media all and (orientation: portrait) {
  img {
    height: 100px;
    width: 200px;
  }
}

/* for printer devices with a minimum resolution of 300dpi pixel density */
@media print and (min-resolution: 300dpi) {
  img {
    height: 600px;
    width: 400px;
  }
}

To learn more about different media features, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_features.

Summary

In this chapter, you learned about responsive design and the SPA architecture. You now understand the role of the AngularJS library when developing a responsive application. We quickly went through all the important features of AngularJS with the code syntax. In the next chapter, you will set up your AngularJS application and learn to create dynamic routing based on the devices.

Resources for Article:

Further resources on this subject:
Best Practices for Modern Web Applications [article]
Important Aspect of AngularJS UI Development [article]
A look into responsive design frameworks [article]

Introduction to AI

Packt
28 Aug 2013
30 min read
(For more resources related to this topic, see here.)

Artificial Intelligence (AI)

Living organisms such as animals and humans have some sort of intelligence that helps them make decisions and perform actions. Computers, on the other hand, are just electronic devices that can accept data, perform logical and mathematical operations at high speeds, and output the results. So, Artificial Intelligence (AI) is essentially the subject of making computers able to think and decide like living organisms in order to perform specific operations. This is obviously a huge subject, but it is important to understand the basics of AI as used in different domains. AI is just a general term; its implementations and applications are different for different purposes, solving different sets of problems. Before we move on to game-specific techniques, we'll take a look at the following research areas in AI applications:

Computer vision: This is the ability to take visual input from sources such as videos and cameras, and analyze it to perform operations such as facial recognition, object recognition, and optical character recognition.
Natural language processing (NLP): This is the ability that allows a machine to read and understand the languages we normally write and speak. The problem is that the languages we use today are difficult for machines to understand. There are many different ways to say the same thing, and the same sentence can have different meanings according to the context. NLP is an important step for machines, since they need to understand the languages and expressions we use before they can process them and respond accordingly. Fortunately, there's an enormous amount of data sets available on the Web that can help researchers do automatic analysis of a language.
Common sense reasoning: This is a technique that our brains can easily use to draw answers even from domains we don't fully understand. Common sense knowledge is a usual and common way for us to attempt certain questions, since our brains can mix and interplay between the context, background knowledge, and language proficiency. But making machines apply such knowledge is very complex, and still a major challenge for researchers.

AI in games

Game AI needs to complement the quality of a game. For that, we need to understand the fundamental requirement that every game must have. The answer should be easy: it is the fun factor. So, what makes a game fun to play? This is the subject of game design, and a good reference is The Art of Game Design by Jesse Schell. Let's attempt to tackle this question without going deep into game design topics. We'll find that a challenging game is indeed fun to play. Let me repeat: it's about making a game challenging. This means the game should be neither so difficult that it's impossible for the player to beat the opponent, nor too easy to win. Finding the right challenge level is the key to making a game fun to play. And that's where the AI kicks in. The role of AI in games is to make the game fun by providing challenging opponents to compete against, and interesting non-player characters (NPCs) that behave realistically inside the game world. So, the objective here is not to replicate the whole thought process of humans or animals, but to make the NPCs seem intelligent by reacting to the changing situations inside the game world in a way that makes sense to the player.
The reason we don't want to make the AI system in games too computationally expensive is that the processing power required for AI calculations needs to be shared with other operations such as graphics rendering and physics simulation. Also, don't forget that they are all happening in real time, and it's really important to achieve a steady frame rate throughout the game. There were even attempts to create a dedicated processor for AI calculations (AIseek's Intia Processor). With ever-increasing processing power, we now have more and more room for AI calculations. However, like all the other disciplines in game development, optimizing AI calculations remains a huge challenge for AI developers.

AI techniques

In this section, we'll walk through some of the AI techniques being used in different types of games. So, let's just take it as a crash course before actually going into implementation. If you want to learn more about AI for games, there are some really great books out there, such as Programming Game AI by Example by Mat Buckland and Artificial Intelligence for Games by Ian Millington and John Funge. The AI Game Programming Wisdom series also contains a lot of useful resources and articles on the latest AI techniques.

Finite State Machines (FSM)

Finite State Machines (FSM) can be considered one of the simplest forms of AI model, and they are commonly used in the majority of games. A state machine basically consists of a finite number of states that are connected in a graph by the transitions between them. A game entity starts with an initial state, and then looks out for the events and rules that will trigger a transition to another state. A game entity can only be in exactly one state at any given time. For example, let's take a look at an AI guard character in a typical shooting game. Its states could be as simple as patrolling, chasing, and shooting.

Simple FSM of an AI guard character

There are basically four components in a simple FSM:

States: This component defines a set of states that a game entity or an NPC can choose from (patrol, chase, and shoot)
Transitions: This component defines the relations between different states
Rules: This component is used to trigger a state transition (player in sight, close enough to attack, and lost/killed player)
Events: This is the component that triggers the checking of the rules (the guard's visible area, distance to the player, and so on)

So, a monster in Quake 2 might have the following states: standing, walking, running, dodging, attacking, idle, and searching. FSMs are widely used in game AI, especially because they are really easy to implement and more than enough for both simple and somewhat complex games. Using simple if/else statements or switch statements, we can easily implement an FSM. It can get messy as we start to have more states and more transitions.

Random and probability in AI

Imagine an enemy bot in an FPS game that can always kill the player with a headshot, or an opponent in a racing game that always chooses the best route and overtakes without colliding with any obstacle. Such a level of intelligence will make the game so difficult that it becomes almost impossible to win. On the other hand, imagine an AI enemy that always chooses the same route to follow, or to escape from the player. AI-controlled entities that behave the same way every time the player encounters them make the game predictable and easy to win.
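Before going further into randomness, it is worth making the FSM section concrete. A minimal, hypothetical sketch of the guard FSM described above might look like this (the world-query helpers are invented for illustration, not part of any engine):

class GuardFSM:
    """The guard can be in exactly one state: 'patrol', 'chase', or 'shoot'."""
    def __init__(self, world):
        self.world = world
        self.state = 'patrol'

    def update(self):
        # Each update, only the rules relevant to the current state are checked
        if self.state == 'patrol':
            if self.world.player_in_sight():          # rule: player in sight
                self.state = 'chase'
        elif self.state == 'chase':
            if self.world.close_enough_to_attack():   # rule: close enough to attack
                self.state = 'shoot'
            elif self.world.lost_player():            # rule: lost player
                self.state = 'patrol'
        elif self.state == 'shoot':
            if self.world.player_killed() or self.world.lost_player():
                self.state = 'patrol'

A transition is simply an assignment of a new state, which is why the if/else approach stays manageable for a handful of states and gets messy as they multiply.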
Coming back to randomness: both of the situations described above obviously affect the fun aspect of the game, and make the player feel like the game is not challenging or fair enough anymore. One way to fix this sort of perfect AI and stupid AI is to introduce some errors in their intelligence. In games, randomness and probabilities are applied in the decision-making process of AI calculations. The following are the main situations when we would want to let our AI entities make a random decision:

Non-intentional: In this situation, a game agent, or perhaps an NPC, might need to make a decision randomly, just because it doesn't have enough information to make a perfect decision, and/or it doesn't really matter what decision it makes. Simply making a decision randomly and hoping for the best result is the way to go in such a situation.
Intentional: This situation is for perfect AI and stupid AI. As we discussed in the previous examples, we need to add some randomness purposely, just to make them more realistic, and also to match the difficulty level that the player is comfortable with. Such randomness and probability could be used for things such as hit probabilities, or plus or minus random damage on top of base damage.

Using randomness and probability, we can add a sense of realistic uncertainty to our game and make our AI system somewhat unpredictable. We can also use probability to define different classes of AI characters. Let's look at the hero characters from Defense of the Ancients (DotA), a popular action real-time strategy (RTS) game mode of Warcraft III. There are three categories of heroes based on the three main attributes: strength, intelligence, and agility. Strength is the measure of the physical power of the hero, while intelligence relates to how well the hero can control spells and magic. Agility defines a hero's ability to avoid attacks and attack quickly. An AI hero from the strength category will have the ability to do more damage during close combat, while an intelligence hero will have a better chance of scoring higher damage using spells and magic. Carefully balancing the randomness and probability between different classes and heroes makes the game a lot more challenging, and makes DotA a lot of fun to play.

The sensor system

Our AI characters need to know about their surroundings and the world they are interacting with in order to make a particular decision. Such information could be as follows:

Position of the player: This information is used to decide whether to attack or chase, or to keep patrolling
Buildings and objects nearby: This information is used to hide or take cover
Player's health and its own health: This information is used to decide whether to retreat or advance
Location of resources on the map in an RTS game: This information is used to occupy and collect resources required for constructing and producing other units

As you can see, it could vary a lot depending on the type of game we are trying to build. So, how do we collect that information?

Polling

One method to collect such information is polling. We can simply do if/else or switch checks in the FixedUpdate method of our AI character. The AI characters just poll the information they are interested in from the game world, do the checks, and take action accordingly. Polling works great if there aren't too many things to check. However, some characters might not need to poll the world states every frame. Different characters might require different polling rates.
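As a hypothetical sketch of that idea (the world-query helpers and the interval values are invented for illustration), per-agent polling with its own rate might look like this:

class PolledAgent:
    """Polls the world at its own rate instead of every frame."""
    def __init__(self, world, poll_interval=0.5):
        self.world = world
        self.poll_interval = poll_interval   # seconds between polls
        self.state = 'patrol'
        self._last_poll = 0.0

    def update(self, now):
        # Cheap early-out: skip the expensive world queries until this
        # particular agent's next poll is due.
        if now - self._last_poll < self.poll_interval:
            return
        self._last_poll = now
        if self.world.player_in_sight(self):          # hypothetical world query
            self.state = 'chase'
        elif self.state == 'chase' and self.world.lost_player(self):
            self.state = 'patrol'

A background NPC might poll once a second, while a guard in combat might poll several times a second, simply by giving each agent a different poll_interval.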
So, usually in larger games with more complex AI systems, we need to deploy an event-driven method using a global messaging system.

The messaging system

AI makes decisions in response to events in the world. The events are communicated between the AI entity and the player, the world, or the other AI entities through a messaging system. For example, when the player attacks an enemy unit from a group of patrol guards, the other AI units need to know about this incident as well, so that they can start searching for and attacking the player. If we were using the polling method, our AI entities would need to check the state of all the other AI entities in order to know about this incident. But with an event-driven messaging system, we can implement this in a more manageable and scalable way. The AI characters interested in a particular event can be registered as listeners, and if that event happens, our messaging system will broadcast it to all listeners. The AI entities can then proceed to take appropriate actions, or perform further checks.

The event-driven system does not necessarily provide a faster mechanism than polling. But it provides a convenient, central checking system that senses the world and informs the interested AI agents, rather than each individual agent having to check the same event in every frame. In reality, both polling and messaging systems are used together most of the time. For example, AI might poll for more detailed information when it receives an event from the messaging system.

Flocking, swarming, and herding

Many living beings such as birds, fish, insects, and land animals perform certain operations such as moving, hunting, and foraging in groups. They stay and hunt in groups because it makes them stronger and safer from predators than pursuing goals individually. So, let's say you want a group of birds flocking, swarming around in the sky; it'll cost too much time and effort for animators to design the movement and animations of each bird. But if we apply some simple rules for each bird to follow, we can achieve emergent intelligence of the whole group with complex, global behavior.

One pioneer of this concept is Craig Reynolds, who presented such a flocking algorithm in his 1987 SIGGRAPH paper, Flocks, Herds and Schools – A Distributed Behavioral Model. He coined the term "boid", which sounds like "bird" but refers to a "bird-like" object. He proposed three simple rules to apply to each unit, which are as follows:

Separation: This rule is used to maintain a minimum distance from neighboring boids to avoid hitting them
Alignment: This rule is used to align the boid with the average direction of its neighbors, and then move at the same velocity with them as a flock
Cohesion: This rule is used to move towards the average position (the center of mass) of the neighboring boids

These three simple rules are all we need to implement a realistic and fairly complex flocking behavior for birds. They can also be applied to group behaviors of any other entity type with little or no modification.

Path following and steering

Sometimes we want our AI characters to roam around in the game world, following a roughly guided or thoroughly defined path. For example, in a racing game, the AI opponents need to navigate on the road. Decision-making algorithms such as the flocking boid algorithm discussed already can only do so much; in the end, it all comes down to dealing with actual movements and steering behaviors.
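Before we dig into steering proper, here is a compact, hedged sketch of the three boid rules just described, in plain Python (the rule weights and the neighbor query are illustrative choices, not canonical values):

def steer_boid(boid, neighbors, min_dist=2.0):
    """boid and neighbors have x, y (position) and vx, vy (velocity) attributes."""
    sep_x = sep_y = ali_x = ali_y = coh_x = coh_y = 0.0
    for other in neighbors:
        dx, dy = boid.x - other.x, boid.y - other.y
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < min_dist:           # separation: push away from close neighbors
            sep_x += dx / dist
            sep_y += dy / dist
        ali_x += other.vx                 # alignment: accumulate neighbor velocities
        ali_y += other.vy
        coh_x += other.x                  # cohesion: accumulate neighbor positions
        coh_y += other.y
    n = len(neighbors)
    if n:
        ali_x, ali_y = ali_x / n - boid.vx, ali_y / n - boid.vy
        coh_x, coh_y = coh_x / n - boid.x, coh_y / n - boid.y
    # The weighted sum of the three rules is the steering force for this boid
    return (sep_x + 0.1 * ali_x + 0.01 * coh_x,
            sep_y + 0.1 * ali_y + 0.01 * coh_y)

Applying this force to every boid each frame is enough to produce the emergent flocking behavior described above.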
Steering behaviors for AI characters have been a research topic for a couple of decades now. One notable paper in this field is Steering Behaviors for Autonomous Characters, again by Craig Reynolds, presented in 1999 at the Game Developers Conference (GDC). He categorized steering behaviors into the following three layers:

Hierarchy of motion behaviors

Let me quote the original example from his paper to understand these three layers:

"Consider, for example, some cowboys tending a herd of cattle out on the range. A cow wanders away from the herd. The trail boss tells a cowboy to fetch the stray. The cowboy says "giddy-up" to his horse, and guides it to the cow, possibly avoiding obstacles along the way. In this example, the trail boss represents action selection, noticing that the state of the world has changed (a cow left the herd), and setting a goal (retrieve the stray). The steering level is represented by the cowboy who decomposes the goal into a series of simple sub goals (approach the cow, avoid obstacles, and retrieve the cow). A sub goal corresponds to a steering behavior for the cowboy-and-horse team. Using various control signals (vocal commands, spurs, and reins), the cowboy steers his horse towards the target. In general terms, these signals express concepts like go faster, go slower, turn right, turn left, and so on. The horse implements the locomotion level. Taking the cowboy's control signals as input, the horse moves in the indicated direction. This motion is the result of a complex interaction of the horse's visual perception, its sense of balance, and its muscles applying torques to the joints of its skeleton."

Then he presented how to design and implement some common and simple steering behaviors for individual AI characters and pairs. Such behaviors include seek and flee, pursue and evade, wander, arrival, obstacle avoidance, wall following, and path following.

A* pathfinding

There are many games where you can find monsters or enemies that follow the player, or go to a particular point while avoiding obstacles. For example, let's take a look at a typical RTS game. You can select a group of units and click a location where you want them to move, or click on the enemy units to attack them. Your units then need to find a way to reach the goal without colliding with the obstacles. The enemy units also need to be able to do the same. Obstacles could be different for different units. For example, an air force unit might be able to pass over a mountain, while the ground or artillery units need to find a way around it.

A* (pronounced "A star") is a pathfinding algorithm that is widely used in games because of its performance and accuracy. Let's take a look at an example to see how it works. Let's say we want our unit to move from point A to point B, but there's a wall in the way and it can't go straight towards the target. So, it needs to find a way to point B while avoiding the wall.

Top-down view of our map

We are looking at a simple 2D example, but the same idea can be applied to 3D environments. In order to find the path from point A to point B, we need to know more about the map, such as the position of the obstacles. For that, we can split our whole map into small tiles, representing the whole map in a grid format, as shown in the following figure:

Map represented in a 2D grid

The tiles can also be of other shapes, such as hexagons and triangles. But we'll just use square tiles here, as that's quite simple and enough for our scenario.
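In code, such a grid can be as simple as a small 2D array; the following hypothetical 5 x 5 layout is only an illustration of the idea (1 marks a tile blocked by the wall, 0 marks a walkable tile; the exact wall placement here is invented):

grid = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]  # indexed as grid[row][col]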
Representing the whole map in a grid makes the search area much simpler to handle, and this is an important step in pathfinding. We can now reference our map in a small 2D array. Our map is now represented by a 5 x 5 grid of square tiles, with a total of 25 tiles. We can start searching for the best path to reach the target. How do we do this? By calculating the movement score of each tile adjacent to the starting tile (a tile on the map not occupied by an obstacle), and then choosing the tile with the lowest cost. There are four possible adjacent tiles to the player if we don't consider diagonal movements. Now, we need to know two numbers to calculate the movement score for each of those tiles. Let's call them G and H, where G is the cost of movement from the starting tile to the current tile, and H is the estimated cost to reach the target tile from the current tile. By adding G and H, we can get the final score of that tile; let's call it F. So we'll be using this formula: F = G + H.

Valid adjacent tiles

In this example, we'll be using a simple method called Manhattan length (also known as Taxicab geometry), in which we just count the total number of tiles between the starting tile and the target tile to know the distance between them.

Calculating G

The preceding figure shows the calculations of G with two different paths. We just add one (which is the cost to move one tile) to the previous tile's G score to get the G score of the current tile. We can give different costs to different tiles. For example, we might want to give a higher movement cost to diagonal movements (if we are considering them), or to specific tiles occupied by, let's say, a pond or a muddy road. Now we know how to get G. Let's look at the calculation of H. The following figure shows different H values from different starting tiles to the target tile. You can try counting the squares between them to understand how we get those values.

Calculating H

So, now we know how to get G and H. Let's go back to our original example to figure out the shortest path from A to B. We first choose the starting tile and then determine the valid adjacent tiles, as shown in the following figure. Then we calculate the G and H scores of each tile, shown in the lower-left and right corners of the tile respectively. And then the final score F, which is G + H, is shown at the top-left corner. Obviously, the tile to the immediate right of the start tile has the lowest F score. So, we choose this tile as our next movement and store the previous tile as its parent. This parent will be useful later, when we trace back our final path.

Starting position

From the current tile, we do the same process again, determining the valid adjacent tiles. This time there are only two valid adjacent tiles, at the top and bottom. The left tile is the starting tile, which we've already examined, and the obstacle occupies the right tile. We calculate the G, the H, and then the F score of those new adjacent tiles. This time we have four tiles on our map all having the same score: six. So, which one do we choose? We can choose any of them. It doesn't really matter in this example, because we'll eventually find the shortest path with whichever tile we choose, if they have the same score. Usually, we just choose the tile added most recently to our adjacent list. This is because later we'll be using some sort of data structure, such as a list, to store the tiles that are being considered for the next move.
So, accessing the tile most recently added to that list could be faster than searching through the list to reach a particular tile that was added previously. In this demo, we'll just randomly choose the tile for our next test, to prove that it can actually find the shortest path.

Second step

So, we choose this tile, which is highlighted with a red border. Again we examine the adjacent tiles. In this step, there's only one new adjacent tile, with a calculated F score of 8. So, the lowest score right now is still 6. We can choose any tile with the score 6.

Third step

So, we choose a tile randomly from all the tiles with the score 6. If we repeat this process until we reach our target tile, we'll end up with a board complete with all the scores for each valid tile.

Reach target

Now all we have to do is to trace back, starting from the target tile, using its parent tile. This will give a path that looks something like the following figure:

Path traced back

So this is the concept of A* pathfinding in a nutshell, without displaying any code. A* is an important concept in the AI pathfinding area, but since Unity 3.5, there are a couple of new features such as automatic navigation mesh generation and the Nav Mesh Agent, which we'll see roughly in the next section and then in more detail later. These features make implementing pathfinding in your games much easier. In fact, you may not even need to know about A* to implement pathfinding for your AI characters. Nonetheless, knowing how the system actually works behind the scenes will help you become a solid AI programmer. Unfortunately, those advanced navigation features in Unity are only available in the Pro version at this moment.

A navigation mesh

Now we have some idea of A* pathfinding techniques. One thing that you might notice is that using a simple grid in A* requires quite a number of computations to get a path that is the shortest to the target and at the same time avoids the obstacles. So, to make it cheaper and easier for AI characters to find a path, people came up with the idea of using waypoints as a guide to move AI characters from the start point to the target point. Let's say we want to move our AI character from point A to point B, and we've set up three waypoints, as shown in the following figure:

Waypoints

All we have to do now is to pick the nearest waypoint and then follow its connected nodes leading to the target waypoint. Most games use waypoints for pathfinding because they are simple and quite effective, using fewer computation resources. However, they do have some issues. What if we want to update the obstacles in our map? We'll also have to place the waypoints again for the updated map, as shown in the following figure:

New waypoints

Following each node to the target can mean that the AI character moves in zigzag directions. Look at the preceding figures; it's quite likely that the AI character will collide with the wall where the path is close to the wall. If that happens, our AI will keep trying to go through the wall to reach the next target, but it won't be able to, and it will get stuck there. Even though we can smooth out the zigzag path by transforming it into a spline and making some adjustments to avoid such obstacles, the problem is that the waypoints don't give any information about the environment other than the spline connecting two nodes. What if our smoothed and adjusted path passes the edge of a cliff or a bridge? The new path might not be a safe path anymore.
So, for our AI entities to be able to effectively traverse the whole level, we're going to need a tremendous number of waypoints, which will be really hard to implement and manage. Let's look at a better solution: the navigation mesh. A navigation mesh is another graph structure that can be used to represent our world, similar to the way we did it with our square tile-based grid or waypoints graph.

Navigation mesh

A navigation mesh uses convex polygons to represent the areas in the map that an AI entity can travel. The most important benefit of using a navigation mesh is that it gives a lot more information about the environment than a waypoint system. Now we can adjust our path safely, because we know the safe region in which our AI entities can travel. Another advantage of using a navigation mesh is that we can use the same mesh for different types of AI entities. Different AI entities can have different properties such as size, speed, and movement abilities. A set of waypoints tailored for a human AI may not work nicely for flying creatures or AI-controlled vehicles; those might need different sets of waypoints. Using a navigation mesh can save a lot of time in such cases.

But generating a navigation mesh programmatically based on a scene is a somewhat complicated process. Fortunately, Unity 3.5 introduced a built-in navigation mesh generator (a Pro-only feature). We'll learn how to use Unity's navigation mesh generation features to easily implement our AI pathfinding.

The behavior trees

Behavior trees are another technique used to represent and control the logic behind AI characters. They have become popular for their applications in AAA games such as Halo and Spore. Previously, we briefly covered FSMs. FSMs provide a very simple way to define the logic of an AI character based on the different states and the transitions between them. However, FSMs are considered difficult to scale, and it is hard to reuse existing logic. We need to add many states and hard-wire many transitions in order to support all the scenarios we want our AI character to consider. So, we need a more scalable approach when dealing with large problems. Behavior trees are a better way to implement AI game characters that could potentially become more and more complex.

The basic elements of behavior trees are tasks, where states are the main elements for FSMs. There are a few different types of tasks, such as Sequence, Selector, Parallel, and Decorator. This can be quite confusing; the best way to understand it is to look at an example. Let's try to translate our example from the FSM section into a behavior tree. We can break all the transitions and states into tasks.

Tasks

Let's look at a Selector task for this behavior tree. Selector tasks are represented with a circle and a question mark inside. First, it'll choose to attack the player. If the Attack task returns success, the Selector task is done and will go back to the parent node, if there is one. If the Attack task fails, it'll try the Chase task. If the Chase task fails, it'll try the Patrol task.

Selector task

What about the tests? They are also tasks in behavior trees. The following diagram shows the use of Sequence tasks, denoted by a rectangle with an arrow inside it. The root selector may choose the first Sequence action. This Sequence action's first task is to check whether the player character is close enough to attack. If this task succeeds, it'll proceed with the next task, which is to attack the player.
Locomotion

Animals (including humans) have a very complex musculoskeletal system (the locomotor system) that gives them the ability to move around using their muscular and skeletal systems. We know where to put our steps when climbing a ladder or stairs, or walking on uneven terrain, and we know how to balance our body to stabilize all the fancy poses we want to make. We can do all this using our bones, muscles, joints, and other tissues, collectively described as our locomotor system.

Now put that into our game development perspective. Let's say we have a human character who needs to walk on both even and uneven surfaces, or on small slopes, and we have only one animation for a "walk" cycle. Lacking a locomotor system, this is how our virtual character would look:

Climbing stairs without locomotion

First, we play the walk animation and advance the player forward. The character's feet then penetrate the surface, so the collision detection system pulls the character up above the surface to prevent this penetration. This is how we usually set up movement on an uneven surface. Even though it doesn't give a realistic look and feel, it does the job and is cheap to implement.

Let's take a look at how we really walk up stairs. We place a foot firmly on the staircase, and using this force we pull up the rest of our body for the next step. This is how we do it in real life with our advanced locomotor system. However, it's not so simple to implement this level of realism inside games. We'd need a lot of animations for different scenarios, including climbing ladders, walking or running up stairs, and so on. So, in the past only large studios with a lot of animators could pull this off, until automated systems came along.

With a locomotion system

Fortunately, Unity 3D has an extension that can do just that: a locomotion system.

Locomotion system Unity extension

This system can automatically blend our animated walk/run cycles and adjust the movements of the bones in the legs to ensure that the feet step correctly on the ground. It can also adapt animations originally made for a specific speed and direction to any surface, arbitrary steps, and slopes. We'll see how to use this locomotion system to apply realistic movement to our AI characters.

Dijkstra's algorithm

Dijkstra's algorithm, named after Professor Edsger Dijkstra, who devised it, is one of the most famous algorithms for finding the shortest paths in a graph with non-negative edge costs. The algorithm was originally designed to solve the shortest path problem in the context of mathematical graph theory, and it finds the shortest paths from a starting node to all the other nodes in the graph. Since most games only need the shortest path between one starting point and one target point, the other paths generated by this algorithm are not really useful. We can stop the algorithm once we find the shortest path from the starting point to the target point, but by then it will still have explored paths from all the points it has visited. So, this algorithm is usually not efficient enough for games, and we won't be doing a Unity demo of Dijkstra's algorithm in this article either. However, Dijkstra's algorithm is important for games that require strategic AI, which needs as much information as possible about the map to make tactical decisions. It also has many applications other than games, such as finding the shortest paths in network routing protocols.
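Although we won't build a Unity demo, the core of Dijkstra's algorithm fits in a short sketch. Here is one in plain JavaScript, assuming a made-up graph represented as an adjacency list of [neighbor, weight] pairs (node names and weights are illustrative):

// A minimal Dijkstra sketch. It returns the shortest distance from
// the start node to every node, which is exactly what distinguishes
// Dijkstra's algorithm from a single-target search such as A*.
function dijkstra(graph, start) {
  const dist = {};
  for (const node of Object.keys(graph)) dist[node] = Infinity;
  dist[start] = 0;
  const unvisited = new Set(Object.keys(graph));
  while (unvisited.size > 0) {
    // Pick the unvisited node with the smallest known distance
    // (a priority queue would be faster for large graphs).
    let current = null;
    for (const node of unvisited) {
      if (current === null || dist[node] < dist[current]) current = node;
    }
    unvisited.delete(current);
    // Relax each outgoing edge of the chosen node.
    for (const [neighbor, weight] of graph[current]) {
      const candidate = dist[current] + weight;
      if (candidate < dist[neighbor]) dist[neighbor] = candidate;
    }
  }
  return dist;
}

// Example usage on a small illustrative graph.
const graph = {
  A: [['B', 1], ['C', 4]],
  B: [['A', 1], ['C', 2], ['D', 5]],
  C: [['A', 4], ['B', 2], ['D', 1]],
  D: [['B', 5], ['C', 1]],
};
console.log(dijkstra(graph, 'A')); // { A: 0, B: 1, C: 3, D: 4 }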
Summary

Game AI and academic AI have different objectives. Academic AI research tries to solve real-world problems and prove theories, without tight resource limits. Game AI focuses on building NPCs, within limited resources, that seem intelligent to the player. The objective of AI in games is to provide a challenging opponent that makes the game more fun to play. We also learned briefly about the different AI techniques that are widely used in games, such as finite state machines (FSMs), random and probability, sensor and input systems, flocking and group behaviors, path following and steering behaviors, AI pathfinding, navigation mesh generation, and behavior trees.

Resources for Article:

Further resources on this subject:
Unity 3.0: Enter the Third Dimension [Article]
Introduction to Game Development Using Unity 3D [Article]
Unity Game Development: Welcome to the 3D World [Article]
Applying styles to Material-UI components in React [Tutorial]
The majority of styles that are applied to Material-UI components are part of the theme styles. In some cases, you need the ability to style individual components without changing the theme. For example, a button in one feature might need a specific style applied to it that shouldn't change every other button in the app. Material-UI provides several ways to apply custom styles to components as a whole, or to specific parts of components.

This article is taken from the book React Material-UI Cookbook by Adam Boduch. This book will serve as your ultimate guide to building compelling user interfaces with React and Material Design. Filled with practical and to-the-point recipes, you will learn how to implement sophisticated UI components. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we will look at the various styling solutions for designing appealing user interfaces, including basic component styles, scoped component styles, extending component styles, moving styles to themes, and others.

Basic component styles

Material-UI uses JavaScript Style Sheets (JSS) to style its components. You can apply your own JSS using the utilities provided by Material-UI.

How to do it...

The withStyles() function is a higher-order function that takes a style object as an argument. The function that it returns takes the component to style as an argument. Here's an example:

import React, { useState } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Card from '@material-ui/core/Card';
import CardActions from '@material-ui/core/CardActions';
import CardContent from '@material-ui/core/CardContent';
import Button from '@material-ui/core/Button';
import Typography from '@material-ui/core/Typography';

const styles = theme => ({
  card: {
    width: 135,
    height: 135,
    textAlign: 'center'
  },
  cardActions: {
    justifyContent: 'center'
  }
});

const BasicComponentStyles = withStyles(styles)(({ classes }) => {
  const [count, setCount] = useState(0);

  const onIncrement = () => {
    setCount(count + 1);
  };

  return (
    <Card className={classes.card}>
      <CardContent>
        <Typography variant="h2">{count}</Typography>
      </CardContent>
      <CardActions className={classes.cardActions}>
        <Button size="small" onClick={onIncrement}>
          Increment
        </Button>
      </CardActions>
    </Card>
  );
});

export default BasicComponentStyles;

Here's what this component looks like:

How it works...

Let's take a closer look at the styles defined by this example:

const styles = theme => ({
  card: {
    width: 135,
    height: 135,
    textAlign: 'center'
  },
  cardActions: {
    justifyContent: 'center'
  }
});

The styles that you pass to withStyles() can be either a plain object or a function that returns a plain object, as is the case in this example. The benefit of using a function is that the theme values are passed to the function as an argument, in case your styles need access to them. There are two styles defined in this example: card and cardActions. You can think of these as Cascading Style Sheets (CSS) classes. Here's what these two styles would look like as CSS:

.card {
  width: 135;
  height: 135;
  text-align: center;
}

.cardActions {
  justify-content: center;
}

By calling withStyles(styles)(MyComponent), you're returning a new component that has a classes property. This object has all of the classes that you can apply to components now.
You can't just do something such as this:

<Card className="card" />

When you define your styles, they have their own build process, and every class ends up getting its own generated name. This generated name is what you'll find in the classes object, so this is why you would want to use it.

There's more...

Instead of working with higher-order functions that return new components, you can leverage Material-UI style hooks. This example already relies on the useState() hook from React, so using another hook in the component feels like a natural extension of the same pattern that is already in place. Here's what the example looks like when refactored to take advantage of the makeStyles() function:

import React, { useState } from 'react';
import { makeStyles } from '@material-ui/styles';
import Card from '@material-ui/core/Card';
import CardActions from '@material-ui/core/CardActions';
import CardContent from '@material-ui/core/CardContent';
import Button from '@material-ui/core/Button';
import Typography from '@material-ui/core/Typography';

const useStyles = makeStyles(theme => ({
  card: {
    width: 135,
    height: 135,
    textAlign: 'center'
  },
  cardActions: {
    justifyContent: 'center'
  }
}));

export default function BasicComponentStyles() {
  const classes = useStyles();
  const [count, setCount] = useState(0);

  const onIncrement = () => {
    setCount(count + 1);
  };

  return (
    <Card className={classes.card}>
      <CardContent>
        <Typography variant="h2">{count}</Typography>
      </CardContent>
      <CardActions className={classes.cardActions}>
        <Button size="small" onClick={onIncrement}>
          Increment
        </Button>
      </CardActions>
    </Card>
  );
}

The useStyles() hook is built using the makeStyles() function, which takes the exact same styles argument as withStyles(). By calling useStyles() within the component, you have your classes object. Another important thing to point out is that makeStyles is imported from @material-ui/styles, not @material-ui/core/styles.

Scoped component styles

Most Material-UI components have a CSS API that is specific to the component. This means that instead of having to assign a class name to the className property of every component that you need to customize, you can target specific aspects of the component that you want to change. Material-UI has laid the foundation for scoping component styles; you just need to leverage the APIs.

How to do it...

Let's say that you have the following style customizations that you want to apply to the Button components used throughout your application:

Every button needs a margin by default.
Every button that uses the contained variant should have additional top and bottom padding.
Every button that uses the contained variant and the primary color should have additional top and bottom padding, as well as additional left and right padding.
Here's an example that shows how to use the Button CSS API to target these three different Button types with styles:

import React, { Fragment } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const ScopedComponentStyles = withStyles(styles)(
  ({ classes: { root, contained, containedPrimary } }) => (
    <Fragment>
      <Button classes={{ root }}>My Default Button</Button>
      <Button classes={{ root, contained }} variant="contained">
        My Contained Button
      </Button>
      <Button
        classes={{ root, contained, containedPrimary }}
        variant="contained"
        color="primary"
      >
        My Contained Primary Button
      </Button>
    </Fragment>
  )
);

export default ScopedComponentStyles;

Here's what the three rendered buttons look like:

How it works...

The Button CSS API takes named styles and applies them to the component. These same names are used in the styles in this code. For example, root applies to every Button component, whereas contained only applies its styles to Button components that use the contained variant, and containedPrimary only applies to Button components that use the contained variant and the primary color.

There's more...

Each style is destructured from the classes property, then applied to the appropriate Button component. However, you don't actually need to do all of this work. Since the Material-UI CSS API takes care of applying styles to components in a way that matches what you're actually targeting, you can just pass the classes directly to the buttons and get the same result. Here's a simplified version of this example:

import React, { Fragment } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const ScopedComponentStyles = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button classes={classes}>My Default Button</Button>
    <Button classes={classes} variant="contained">
      My Contained Button
    </Button>
    <Button classes={classes} variant="contained" color="primary">
      My Contained Primary Button
    </Button>
  </Fragment>
));

export default ScopedComponentStyles;

The output looks the same, because only buttons that match the constraints of the CSS API get the styles applied to them. For example, the first Button has the root, contained, and containedPrimary styles passed to its classes property, but only root is applied, because it isn't using the contained variant or the primary color. The second Button also has all three styles passed to it, but only root and contained are applied. The third Button has all three styles applied to it because it meets the criteria of each style.

Extending component styles

You can extend styles that you apply to one component with styles that you apply to another component. Since your styles are JavaScript objects, one option is to extend one style object with another. The only problem with this approach is that you end up with a lot of duplicate style properties in the CSS output. A better alternative is to use the jss extend plugin.

How to do it...

Let's say that you want to render three buttons and share some of the styles among them.
One approach is to extend generic styles with more specific styles using the jss extend plugin. Here's how to do it:

import React, { Fragment } from 'react';
import { JssProvider, jss } from 'react-jss';
import {
  withStyles,
  createGenerateClassName
} from '@material-ui/styles';
import {
  createMuiTheme,
  MuiThemeProvider
} from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    extend: 'root',
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    extend: 'contained',
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const App = ({ children }) => (
  <JssProvider
    jss={jss}
    generateClassName={createGenerateClassName()}
  >
    <MuiThemeProvider theme={createMuiTheme()}>
      {children}
    </MuiThemeProvider>
  </JssProvider>
);

const Buttons = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button className={classes.root}>My Default Button</Button>
    <Button className={classes.contained} variant="contained">
      My Contained Button
    </Button>
    <Button
      className={classes.containedPrimary}
      variant="contained"
      color="primary"
    >
      My Contained Primary Button
    </Button>
  </Fragment>
));

const ExtendingComponentStyles = () => (
  <App>
    <Buttons />
  </App>
);

export default ExtendingComponentStyles;

Here's what the rendered buttons look like:

How it works...

The easiest way to use the jss extend plugin in your Material-UI application is to use the default JSS plugin presets, which include jss extend. Material-UI has several JSS plugins installed by default, but jss extend isn't one of them. Let's take a look at the App component in this example to see how this JSS plugin is made available:

const App = ({ children }) => (
  <JssProvider
    jss={jss}
    generateClassName={createGenerateClassName()}
  >
    <MuiThemeProvider theme={createMuiTheme()}>
      {children}
    </MuiThemeProvider>
  </JssProvider>
);

The JssProvider component is how JSS is enabled in Material-UI applications. Normally, you wouldn't have to interface with it directly, but this is necessary when adding a new JSS plugin. The jss property takes the JSS preset object that includes the jss extend plugin. The generateClassName property takes a function from Material-UI that helps generate class names that are specific to Material-UI.

Next, let's take a closer look at some styles:

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    extend: 'root',
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    extend: 'contained',
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

The extend property takes the name of a style that you want to extend. In this case, the contained style extends root, and containedPrimary extends contained and root. Now let's take a look at how this translates into CSS. Here's what the root style looks like:

.Component-root-1 {
  margin: 16px;
}

Next, here's the contained style:

.Component-contained-2 {
  margin: 16px;
  padding-top: 16px;
  padding-bottom: 16px;
}

Finally, here's the containedPrimary style:

.Component-containedPrimary-3 {
  margin: 16px;
  padding-top: 16px;
  padding-left: 32px;
  padding-right: 32px;
  padding-bottom: 16px;
}

Note that the properties from the more-generic styles are included in the more-specific styles. There are some properties duplicated, but the duplication is in the generated CSS, instead of in your JavaScript object properties. Furthermore, you could put these extended styles in a more central location in your code base, so that multiple components could use them.
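For comparison, the plain-object approach mentioned at the start of this recipe might look like the following sketch (a rough illustration; the hard-coded spacing values stand in for theme.spacing() calls). It works without any extra plugin, but each extended object repeats its parent's properties, and that duplication carries straight through into the generated CSS:

// Extending style objects with spreads instead of the jss extend
// plugin. Each object literally copies its parent's properties.
const root = { margin: 16 };
const contained = { ...root, paddingTop: 16, paddingBottom: 16 };
const containedPrimary = { ...contained, paddingLeft: 32, paddingRight: 32 };

// This object could then be passed to withStyles() as usual.
const styles = { root, contained, containedPrimary };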
Moving styles to themes

As you develop your Material-UI application, you'll start to notice style patterns that repeat themselves. In particular, styles that apply to one type of component, such as buttons, evolve into a theme.

How to do it...

Let's revisit the example from the Scoped component styles section:

import React, { Fragment } from 'react';
import { withStyles } from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const styles = theme => ({
  root: {
    margin: theme.spacing(2)
  },
  contained: {
    paddingTop: theme.spacing(2),
    paddingBottom: theme.spacing(2)
  },
  containedPrimary: {
    paddingLeft: theme.spacing(4),
    paddingRight: theme.spacing(4)
  }
});

const ScopedComponentStyles = withStyles(styles)(({ classes }) => (
  <Fragment>
    <Button classes={classes}>My Default Button</Button>
    <Button classes={classes} variant="contained">
      My Contained Button
    </Button>
    <Button classes={classes} variant="contained" color="primary">
      My Contained Primary Button
    </Button>
  </Fragment>
));

export default ScopedComponentStyles;

Here's what these buttons look like after they have these styles applied to them:

Now, let's say you've implemented these same styles in several places throughout your app, because this is how you want your buttons to look. At this point, you've evolved a simple component customization into a theme. When this happens, you shouldn't have to keep implementing the same styles over and over again. Instead, the styles should be applied automatically by using the correct component and the correct property values. Let's move these styles into the theme:

import React from 'react';
import {
  createMuiTheme,
  MuiThemeProvider
} from '@material-ui/core/styles';
import Button from '@material-ui/core/Button';

const defaultTheme = createMuiTheme();

const theme = createMuiTheme({
  overrides: {
    MuiButton: {
      root: {
        margin: 16
      },
      contained: {
        paddingTop: defaultTheme.spacing(2),
        paddingBottom: defaultTheme.spacing(2)
      },
      containedPrimary: {
        paddingLeft: defaultTheme.spacing(4),
        paddingRight: defaultTheme.spacing(4)
      }
    }
  }
});

const MovingStylesToThemes = ({ classes }) => (
  <MuiThemeProvider theme={theme}>
    <Button>My Default Button</Button>
    <Button variant="contained">My Contained Button</Button>
    <Button variant="contained" color="primary">
      My Contained Primary Button
    </Button>
  </MuiThemeProvider>
);

export default MovingStylesToThemes;

Now, you can use Button components without having to apply the same styles every time.

How it works...

Let's take a closer look at how your styles fit into a Material-UI theme:

overrides: {
  MuiButton: {
    root: {
      margin: 16
    },
    contained: {
      paddingTop: defaultTheme.spacing(2),
      paddingBottom: defaultTheme.spacing(2)
    },
    containedPrimary: {
      paddingLeft: defaultTheme.spacing(4),
      paddingRight: defaultTheme.spacing(4)
    }
  }
}

The overrides property is an object that allows you to override the component-specific properties of the theme. In this case, it's the MuiButton component styles that you want to override. Within MuiButton, you have the same CSS API that is used to target specific aspects of components. This makes moving your styles into the theme straightforward, because there isn't much to change. One thing that did have to change in this example is the way spacing works. In normal styles that are applied via withStyles(), you have access to the current theme, because it's passed in as an argument. You still need access to the spacing data, but there's no theme argument, because you're not in a function.
Since you're just extending the default theme, you can access it by calling createMuiTheme() without any arguments, as this example shows.

This article explored some of the ways you can apply styles to the Material-UI components of your React applications. There are many other styling options available to your Material-UI app beyond withStyles(). There's the styled() higher-order component function that emulates styled components. You can also jump outside the Material-UI style system and use inline CSS styles, or import CSS modules and apply those styles.

If you found this post useful, do check out the book React Material-UI Cookbook by Adam Boduch. This book will help you build modern-day applications by implementing Material Design principles in React applications using Material-UI.

Keeping animations running at 60 FPS in a React Native app [Tutorial]
React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]
Building a Progressive Web Application with Create React App 2 [Tutorial]

Installing PHP-Nuke
The steps to install and configure PHP-Nuke are simple:

1. Download and extract the PHP-Nuke files.
2. Download and apply ChatServ's patches.
3. Create the database for PHP-Nuke.
4. Create a database user, and fill the database with data.
5. Make some simple changes to the PHP-Nuke configuration file.
6. Copy the PHP-Nuke files to the document root of the web server.
7. Test it out!

Let's get started.

Downloading PHP-Nuke

The latest version of PHP-Nuke can be downloaded from the phpnuke.org downloads page:

http://www.phpnuke.org/modules.php?name=Downloads&d_op=viewdownload&cid=1

You can also obtain older versions of PHP-Nuke, including version 1.0, from SourceForge:

http://sourceforge.net/project/showfiles.php?group_id=7511&package_id=7622

SourceForge is the world's largest home of open-source projects. Many projects use SourceForge's facilities to host and maintain their projects. You can find almost anything you want on SourceForge; whether it is in a usable state or has been updated recently is another matter.

Extracting PHP-Nuke

Once you have downloaded PHP-Nuke, you should extract the contents of the PHP-Nuke ZIP archive to the root of your c: drive, into a folder called PHP-Nuke-7.8. (If you extract the files elsewhere, create the folder PHP-Nuke-7.8 and copy the contents of the main unzipped folder to this new folder.) If you don't have a tool for extracting the files, you can download an evaluation edition (or buy a full edition) of WinZip from www.winzip.com. There are also free, powerful extracting tools such as ZipGenius (http://www.zipgenius.it/index_eng.htm) and 7-Zip (http://sourceforge.net/projects/sevenzip/), among others.

In the PHP-Nuke-7.8 folder, you will find three subfolders called html, sql, and upgrades. The upgrades folder contains scripts that handle upgrading the database between different versions, the sql folder contains the definition of the PHP-Nuke database that we will be working with, and the html folder contains the guts of your PHP-Nuke installation. The html folder contains all the PHP scripts, HTML files, images, CSS stylesheets, and so on that drive PHP-Nuke. Within the html folder are further subfolders; some of these include:

modules: Contains the modules that make up your PHP-Nuke site. Modules are the essence of PHP-Nuke's operation; we look at them from article Your First Page onwards.
blocks: Contains PHP-Nuke's blocks. Blocks are 'mini-functionality' units and usually provide snippet views of modules. We will look at blocks in article Managing the Site.
language: Contains PHP-Nuke language files. These allow the language of PHP-Nuke's interface to be changed.
images: Contains images used in the display of the PHP-Nuke site.
themes: Contains the themes for PHP-Nuke. The use of themes allows you to completely change the look of a PHP-Nuke site with a click of a button.
includes, db: Contain code to support the running of PHP-Nuke. The db folder, for example, contains database access code.
admin: Contains code to power the administration area of your site.

Downloading the Patches

No software is without its flaws, and PHP-Nuke is no exception. After a release, the large user community invariably finds problems and potential security holes. Furthermore, PHP-Nuke contains features such as its forum, which is in fact the phpBB application, specially modified to work with PHP-Nuke.
phpBB itself is updated on a regular basis to correct critical security vulnerabilities or to fix other problems, and consequently the corresponding part of PHP-Nuke also needs to be updated. Rather than releasing a new version of PHP-Nuke for these situations, patches for its various parts are released.

ChatServ's patches from www.nukeresources.com are mostly concerned with variable validation; in other words, making sure that the variables used in the application are of the right type for storing in the database. This has been an area of weakness in many earlier versions of PHP-Nuke. The patches are often incorporated into subsequent versions of PHP-Nuke, so each new version becomes more robust.

Note that you don't have to apply the patches, and PHP-Nuke will still work without them. However, by applying them you will have taken a good step towards improving the security of your site.

If you navigate to http://www.nukeresources.com, there is a handy menu on the front page for accessing the patches.

To obtain the patch corresponding to your version, click the link and you will be taken to the relevant file (of course, www.nukeresources.com is a PHP-Nuke powered site!). Click on the Nuke 7.8 link to go to the Downloads page of www.nukeresources.com. On this page, clicking the Download this file Now! link will download the patches for PHP-Nuke 7.8. The name of this file will be of the form 78patched.tar.gz. This is a GZIP-compressed file that contains all the patches that we are about to apply. The GZIP file can be extracted with WinZip, or any of the other utilities we discussed earlier.

The patches are simply modified versions of the original PHP-Nuke files. The original files have been modified to address various security issues that may have been identified since the initial release, or maybe since the last version of the patch.

Applying the Patches

To apply the patches, first we need to extract the 78patched.tar.gz file. We will extract the files into a folder called patches that we will create in the PHP-Nuke-7.8 folder. After extracting the files, copy the contents of the patches folder to your html folder. Do not copy the patches folder itself; copy its contents. The patches folder contains files that replace the files in the default installation, so you will get a Confirm File Replace window. Select Yes for all the files, and when the copying is complete, your installation is ready to go.

We have performed this patching immediately after installing PHP-Nuke, but we could have done it at any time. Doing it at this point is more sensible, as it means that we are working on the most secure version of PHP-Nuke. Also, the patch process described here overwrites existing PHP-Nuke installation files; if you have modified these files, the changes will be lost on applying the patch, which makes applying the patches later without disturbing your changes more demanding.

There is one further thing to watch for after applying the patches. You may find that the patched files have had their permissions set to read-only, and that you are unable to modify them. To modify the files (and we do have to modify at least the config.php file in this article), you will need to remove this setting.
You can do this on Windows by right-clicking on the file, selecting Properties from the menu, unchecking the Read-only setting, and clicking the OK button.

We've done almost all the work with the files that we need to; now we turn our attention to creating and populating PHP-Nuke's database.

Preparing the PHP-Nuke Database

We'll be using the phpMyAdmin tool to do our database work. phpMyAdmin is part of the XAMPP installation (detailed in Appendix A), or can be downloaded from www.phpmyadmin.net if you don't already have it. phpMyAdmin provides a powerful web interface for working with your MySQL databases.

First of all, open your browser and navigate to http://localhost/phpmyadmin/, or whatever the location of your phpMyAdmin installation is.

Creating the Database

We need to create an empty database for PHP-Nuke to hold all the data about our site. To do this, we simply enter a name for our database into the Create new database textbox. We will call our database nuke. Enter this, and click the Create button. The name you give doesn't particularly matter, as long as it is not the name of an already existing database. If you try to use the name of an existing database, phpMyAdmin will inform you of this, and no action will be taken. The exact name isn't particularly important at this point, because there is another configuration step coming up that requires us to tell PHP-Nuke the name of the database we've created for it.

After clicking Create, the screen will reload and you will be notified of the successful creation of your database.

Creating a Database User

Before we start populating the database, we will create a database user that can access only the PHP-Nuke database. This user is not a human, but will be used by PHP-Nuke to connect to the database while it performs its data-handling activities. The advantage of creating a database user is that it adds an extra level of security to our installation. PHP-Nuke will be able to work only with data in this database on the MySQL server, and no other, and it will be restricted in the operations it can perform on the tables in the database.

We will need to create a username for this boxed-in user to access the nuke database. Let's call our user nuker and go with the password nukepassword. However, in order to add an extra level of security, we will introduce some digits and other slight twists into nukepassword to strengthen it, and so use the word No0kPassv0rd as our database user password.

To create the database user, click the SQL tab, and enter the following into the Run SQL query/queries on database textbox:

GRANT ALL PRIVILEGES ON nuke.* TO nuker@localhost IDENTIFIED BY 'No0kPassv0rd' WITH GRANT OPTION

Your screen should look like this:

Click the Go button, and the database user will be created.

Populating the Database

Now we are ready to fill our database with data for PHP-Nuke. This doesn't mean we start typing the data in ourselves; the data comes with the PHP-Nuke installation, in a file called nuke.sql in the sql folder. This file contains a number of SQL statements that define the tables within the database and also fill them with 'raw' data for the site. However, before we fill the database with the tables from this file, we need to make a modification to it.

By default, the name of each database table in PHP-Nuke begins with nuke_.
For example, there is a table with the name nuke_stories that holds information about stories, and a table called nuke_topics that holds information about story topics. These are just two of the tables; there are more than 90 in the standard installation. The word nuke_ is a 'table prefix', and is used to ensure that there are no clashes between the names of PHP-Nuke's tables and the tables of another application in the same database, since the rest of the table name is descriptive of the data stored in the table, and other applications may have similarly named tables.

What this does mean is that unless this table prefix is changed, the table names in your PHP-Nuke database will be known to anyone attempting to hack your site. Many of the typical attacks used to damage PHP-Nuke sites are based on the fact that the names of the tables in the database powering a PHP-Nuke site are known. By changing the table prefix to something less obvious, you take another step toward making your site more secure.