
How-To Tutorials

7019 Articles

Getting to know PyMC3, a probabilistic programming framework for Bayesian Analysis in Python

Vincy Davis
11 Dec 2019
5 min read
Bayes' theorem, named after 18th-century British mathematician Thomas Bayes, is a mathematical formula for determining conditional probability. This theorem is used to revise or update existing predictions or theories using new or additional evidence. Bayes' theorem is also used in the field of data science, as it provides a rule for moving from a prior probability to a posterior probability. In Bayesian statistics, a prior probability is the probability of an event before new data are collected, and a posterior probability is the conditional probability assigned after the relevant evidence is acquired. This is why Bayesian methods are among the most popular machine learning techniques in data science.

In this post, we are going to discuss a specific Bayesian implementation called probabilistic programming (PP) in Python, considering that modern Bayesian statistics is mainly done by writing code. Probabilistic programming enables flexible specification of complex Bayesian statistical models, thus giving users the ability to focus more on model design, evaluation, and interpretation, and less on mathematical or computational details.

Further Reading: To know more about Bayesian data analysis techniques using PyMC3 and ArviZ, read our book 'Bayesian Analysis with Python', written by Osvaldo Martin. This book will help you acquire skills for a practical and computational approach towards Bayesian statistical modeling. The book also lists the best practices in Bayesian analysis with the help of sample problems and practice exercises.

A group of researchers has published a paper, "Probabilistic Programming in Python using PyMC", which provides a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. PyMC3 is a popular open-source PP framework in Python with an intuitive and powerful syntax close to the natural syntax statisticians use to describe models. The PyMC3 installation depends on several third-party Python packages, which are automatically installed when installing via pip. It requires four dependencies: Theano, NumPy, SciPy, and Matplotlib. To take full advantage of PyMC3, the researchers suggest, the optional dependencies Pandas and Patsy should also be installed using: pip install patsy pandas.

How to use PyMC3 in probabilistic programming?

In the paper, the researchers have utilized a simple Bayesian linear regression model with normal priors for the parameters. The unknown variables in the model are also assigned prior distributions. The artificial data in the model are then simulated using NumPy's random module, and a PyMC3 model is then built to retrieve the corresponding parameters. The PyMC3 model structure used to describe this generative process is straightforward, as it stays close to the statistical notation.

Firstly, the necessary components are imported from PyMC3 to build the required model. It is presented in full initially and then explained in parts. The paper states, "Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement: with basic_model: This creates a context manager, with our basic model as the context, that includes all statements until the indented block ends." This means that all the PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes.
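To make that workflow concrete, here is a condensed sketch along the lines of the paper's linear regression example. The priors, variable names, and data sizes below are illustrative rather than a verbatim reproduction of the paper's code, and keyword argument spellings (sd versus sigma) vary between PyMC3 releases:

import numpy as np
import pymc3 as pm

# Simulate artificial data with NumPy's random module
np.random.seed(123)
alpha_true, sigma_true = 1.0, 1.0
beta_true = [1.0, 2.5]
size = 100
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
Y = alpha_true + beta_true[0] * X1 + beta_true[1] * X2 + np.random.randn(size) * sigma_true

basic_model = pm.Model()

with basic_model:
    # Priors for the unknown model parameters
    alpha = pm.Normal('alpha', mu=0, sd=10)
    beta = pm.Normal('beta', mu=0, sd=10, shape=2)
    sigma = pm.HalfNormal('sigma', sd=1)

    # Expected value of the outcome
    mu = alpha + beta[0] * X1 + beta[1] * X2

    # Likelihood of the observed (simulated) data
    Y_obs = pm.Normal('Y_obs', mu=mu, sd=sigma, observed=Y)

Every variable created inside the with block is registered on basic_model automatically, which is exactly the context-manager behavior the quoted passage describes.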
In the absence of this context manager idiom, users would be forced to manually associate each of the variables with the basic model immediately after creating them. Also, if a user tries to create a new random variable without a with model: statement, it will cause an error due to the absence of an obvious model for the variable to be added to.

Next, posterior estimates need to be obtained for the unknown variables in the model. The researchers explain two approaches to obtaining posterior estimates; users can choose either of them depending on the structure of the model and the goals of the analysis. The first approach is finding the maximum a posteriori (MAP) point using optimization methods, and the second is computing summaries based on samples drawn from the posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods. For producing a posterior analysis of the required model, PyMC3 provides plotting and summarization functions for inspecting the sampling output.

A simple posterior plot can be created using traceplot. In the traceplot, the left column consists of the smoothed histogram, while the right column contains the samples of the Markov chain plotted in sequential order. In addition, the summary function of PyMC3 provides a text-based output of common posterior statistics.

You can also learn more about the practical implementation of PyMC3 and its loss functions in the book 'Bayesian Analysis with Python' by Packt Publishing.

How Facebook data scientists use Bayesian optimization for tuning their online systems
How to perform exception handling in Python with 'try, catch and finally'
Fake Python libraries removed from PyPi when caught stealing SSH and GPG keys, reports ZDNet
Netflix open-sources Metaflow, its Python framework for building and managing data science projects
ActiveState adds thousands of curated Python packages to its platform
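Tying the two estimation approaches described above back to code, a minimal sketch continuing the basic_model from the earlier snippet might look as follows (function names match PyMC3 3.x; in later releases traceplot and summary are provided through ArviZ):

import pymc3 as pm

with basic_model:
    # Approach 1: find the maximum a posteriori (MAP) point via optimization
    map_estimate = pm.find_MAP()

    # Approach 2: draw samples from the posterior with MCMC (NUTS by default)
    trace = pm.sample(1000)

# Smoothed histograms (left column) and samples in sequential order (right column)
pm.traceplot(trace)

# Text-based summary of common posterior statistics
pm.summary(trace)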

Inspecting APIs in ASP.NET Core [Tutorial]

Prasad Ramesh
24 Feb 2019
7 min read
REST is an architectural style for implementing communication between the application client and server over HTTP. RESTful APIs use HTTP verbs (POST, GET, PUT, DELETE, and so on) to dictate the operation to be performed (Create, Read, Update, Delete) by the server on the domain entity. The REST style has become the de facto standard for creating services in modern application development. This makes it easy to use and consume services in any technology and on any platform, such as web frontends, desktop applications, or other web services.

This article is an excerpt from a book written by Tamir Dresher, Amir Zuker, and Shay Friedman titled Hands-On Full-Stack Web Development with ASP.NET Core. In this book, you will learn how to build RESTful APIs in C# with ASP.NET Core, web APIs, and Entity Framework.

Overview: REST APIs with ASP.NET Core

A basic ASP.NET Core MVC application can be broken down into three layers: models, controllers, and views. RESTful APIs in ASP.NET Core work very similarly; the only difference is that, instead of returning responses as visual views, the API response is a payload of data (usually in JSON format). The data returned from the API is later consumed by clients, such as Angular-based applications that can render the data as views, or by headless clients that have no UI and simply process data. For example, consider a background process that periodically sends notifications to a user about their account status.

Before ASP.NET Core, Microsoft created an explicit distinction between ASP.NET MVC and the ASP.NET Web API. The former was used to create web applications with views that are generated by the server, while the latter was used to create services that contain only logic and can be consumed by any client. Over time, the distinction between the two frameworks caused duplication of code and added a burden on the developers who needed to learn and master two technologies. ASP.NET Core unified the two frameworks into the ASP.NET Core MVC suite, and made it simpler to create web applications, with or without visual responses. Let's start with a simple API that will be the foundation for our application; let's call it the GiveNTake application.

Creating a simple API

The GiveNTake application allows the user to see a catalog of available products. For this to be possible, our server needs to include a specific API method that the client application can call and get back the collection of available products. We will treat the product as a simple string in the format of [Product ID] - [Product Name]:

Open any basic project you may have created, and add a new controller class to the Controllers folder. Right-click on the Controllers folder and choose Add | Controller. In the list of available templates, choose API Controller - Empty and click Add. Set the name to ProductsController and click Add. Add the following method to the generated controller:

public string[] GetProducts() { return new[] { "1 - Microwave", "2 - Washing Machine", "3 - Mirror" }; }

Your controller should look like this:

[Route("api/Products")] [ApiController] public class ProductsController : Controller { public string[] GetProducts() { return new[] { "1 - Microwave", "2 - Washing Machine", "3 - Mirror" }; } }

Congratulations! You have completed coding your first API method. The GetProducts method returns the collection of the available products. To see it in action, build and run your project. This will open a browser with the base address of your ASP.NET application.
Add the  /api/Products string to the base address in the browser's address bar and execute it. You should see the collection of strings appear on the screen like so: Inspecting your APIs using Fiddler and Postman Using your browser to execute your APIs is nice enough for simple APIs that retrieve data, but as you go along and extend your APIs, you'll soon find that you need other powerful tools to test and debug what you develop. There are many tools that let you inspect and debug your APIs, but I have chosen to teach you about Fiddler and Postman because they are both simple and powerful. Fiddler Fiddler is a free web debugging tool that works as a proxy, logging all HTTP(S) traffic that is executed by processes in your computer. Fiddler allows you to inspect the traffic to see that exact HTTP request that was sent and the exact HTTP response that was returned. You can also use other advanced features, such as setting breakpoints and overriding the data that is sent or received. To install Fiddler, navigate to https://www.telerik.com/fiddler and click the Free download button. Save and run the installer: The Fiddler main screen is built from these main parts: Sessions list: Shows the HTTP(S) requests that were sent from processes in your machine Fiddler tabs: Contains different tools for inspecting and controlling sessions Request inspector: When the Inspectors tab and inner Raw tab are selected, this section shows the request as it was sent over-the-wire Response inspector: When the Inspectors tab and inner Raw tab are selected, this section shows the response as it was sent over-the-wire Immediately after you run Fiddler, it starts collecting the HTTP sessions that are performed in your machine. If you refresh the browser that you used to navigate to the /api/Products API  you created, you should see this session in Fiddler's Sessions List, as shown in the preceding screenshot. If you run a .NET application that sends HTTP requests to an address in your localhost, you won't see the session appear in Fiddler. Changing the address to localhost.fiddler will force the request to be captured by Fiddler. Fiddler is a great tool for debugging the requests and responses that are made in your application, but it means that you need to have a client that sends those requests. Many times when debugging and experimenting with APIs, you want to create HTTP requests manually and inspect them. You can accomplish this task with Fiddler's Composer tab, but I want to teach you about another tool that is much more suitable for these scenarios—Postman. Postman Postman is an HTTP client that simplifies the testing of web services and RESTful APIs. Postman allows you to easily construct HTTP requests, send them, and inspect them. Download Postman from https://www.getpostman.com/, and then install and run it: On the introduction screen, click on the Request option: Enter GetProducts in the Request name field, and then type GiveNTake into the collection section and create a new collection. Press Save to create the new request: Enter the full URL of the GetProducts API (for example, http://localhost:5267/api/products) in the URL field: Make sure that the HTTP Verb is set to GET and click on the Send button. 
After a few moments, you should see the response that was received from your service, and you can now inspect the status code, response body, and headers: You will find Postman to be an indispensable development tool while you develop your APIs, and we will use it many times as we go deeper into ASP.NET Core in this book. At this point, you might be wondering, how come the GetProducts method was invoked when we navigated to /api/Products? To answer this question, we need to talk about how ASP.NET Core routes requests into controllers and actions. ASP.NET Core provides the necessary infrastructure you need to create powerful RESTful APIs. In this article, you learned how to create controllers and actions that respond to HTTP requests, and return HTTP responses that you control. We've introduced two popular tools: Fiddler and Postman, and you'll find them very useful when you create and debug your API applications. To know more about APIs in ASP.NET Core, check out the book Hands-On Full-Stack Web Development with ASP.NET Core. Google announces the general availability of a new API for Google Docs What to expect in ASP.NET Core 3.0 How to call an Azure function from an ASP.NET Core MVC application
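If you prefer to script this check rather than use a GUI client, the request Postman sent above can also be reproduced with a few lines of Python from the standard library. This is only an illustrative sketch: the port (5267) is taken from the example URL used with Postman and will almost certainly differ on your machine.

import json
import urllib.request

# Port copied from the example URL above; replace it with the one your project uses
url = "http://localhost:5267/api/products"

with urllib.request.urlopen(url) as response:
    print(response.status)                        # expect 200
    print(response.headers.get("Content-Type"))   # expect application/json
    products = json.load(response)                 # parse the JSON payload

print(products)  # e.g. ['1 - Microwave', '2 - Washing Machine', '3 - Mirror']

Running the script should print the same three product strings the browser and Postman showed earlier.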

NativeScript: What is it, and how to set it up

Amey Varangaonkar
09 May 2018
9 min read
In this tutorial, we introduce you to the NativeScript library, which allows you to create and deploy a web application on a mobile device and use it like a mobile app, rather than as a web or a hybrid application.

The following excerpt is taken from the book TypeScript 2.x By Example, written by Sachin Ohri. This book presents essential techniques to leverage the power of TypeScript 2.x to build efficient web applications.

What is NativeScript?

NativeScript is an open source framework for building native Android and iOS applications with web technologies. This means we can develop native mobile applications with JavaScript, TypeScript, and/or Angular. It is based on the idea of write once, run everywhere. Applications developed with NativeScript are pure mobile apps when compared to applications developed with technologies such as PhoneGap. As they are native mobile applications, we can use all the richness of the mobile platform and provide the performance associated with that. We use native APIs and native controls to render, which allows us to create more sophisticated applications compared to a hybrid approach. Hybrid applications do not provide the same level of flexibility or performance because they are hosted on a separate framework and do not get to interact with low-level mobile APIs directly.

The best part is that it does not require us to learn a new programming language, unlike developing an iOS-based application, for which you need to know Objective-C or Swift. So, we can use our existing skills to develop mobile applications.

NativeScript design

NativeScript is a runtime that sits on top of the native mobile operating system and uses the V8 JavaScript virtual machine on Android and JavaScriptCore on iOS. Having access to these platforms allows NativeScript to expose a unified API system for developers, which is then converted into the native API at runtime. This translation between the JavaScript APIs and the native platform APIs is possible through reflection, which NativeScript uses to create its own set of interfaces.

Another advantage of NativeScript using JavaScript is its independence from specific editors. You can use any of your favorite editors to develop a NativeScript application, and you will have access to all the native APIs rather than using Xcode for iOS-based apps and Android Studio for Android-based apps.

Architecture

The following is a high-level diagram of NativeScript and its interaction with the mobile platform: As we can see, the runtime is responsible for converting JavaScript application code to the native platform code. It has various components that work together to convert and call the native APIs. Because NativeScript uses V8 and JavaScriptCore, it has access to all the latest ECMAScript language specifications for development, which allows us to use the latest ES6 feature set. One of the main components that we need to understand in NativeScript design is modules.

Modules

The NativeScript team made sure that the platform was developed in a modular fashion, much like plugins, which allow us to include only the modules that we need in our development. These modules provide us with an abstraction of the native APIs and allow us to write code that works on both platforms. There are separate APIs for each logical piece of functionality. For example, if you want to use SQLite for your storage needs, there is a package for that; if you want to use a filesystem, there is a package for that.
Let's take one example to see how these modules help us write consistent code for a multiplatform environment. If you want to access a filesystem on the native platform using NativeScript, you will write code similar to what you see in the following code snippet:

var filesystem = require("file-system"); new filesystem.file(path)

This code is written in pure JavaScript, which first gets a reference to the file-system module and then, using the API of the file-system module, calls a file method. This code, when executed by the NativeScript runtime, first checks the platform it wants to run on and then converts the code accordingly, as shown in the following code snippets. The Android version of the code will be as follows:

new java.io.file(path)

The iOS version of the code will be as follows:

nsFileManager.defaultManager(); fileManager.createFileAtPathContentsAttributes(path);

If you have worked on any of the mobile platforms before, you will recognize this code as using the native filesystem API to access the file path.

NativeScript versus web applications

Until now, we have been mentioning that we can use our web technologies to write mobile applications with the help of NativeScript. So, can we write a pure web application and use it in the NativeScript runtime to create a mobile application? Yes and no. Yes, we can, and we will see with our application that we can use the same code base to write with NativeScript. No, because not all components of web applications can be directly used.

NativeScript allows us to use our existing JavaScript/TypeScript and CSS skills for developing the business logic and the design for our application. But because the native platforms are not web-based and do not have a DOM, we cannot use HTML as the template for our applications. Although you will see that the extension of our template files will be HTML, the element tags will be somewhat different. To give you a brief example, it does not have UI elements such as <div> or <span>, but has elements such as <StackLayout> and <DockLayout>, which allow us to arrange our UI components. Another thing to note here is that these UI elements are then converted into native elements based on the platform. So, if we use the <Button> control in NativeScript, it will get converted into android.widget.Button on the Android platform and UIButton on iOS.

Setting up your NativeScript environment

NativeScript provides very good documentation about installing and setting up your development environment. You can find the documentation at https://docs.nativescript.org/angular/start/quick-setup. We will briefly go through the setup process here, but recommend that you go through the documentation to understand the process.

NativeScript CLI

The best way to use NativeScript is through the NativeScript CLI. You can install it from npm using the following command:

npm install -g nativescript

This command will install the NativeScript library in your global scope. To confirm that the installation has been successful, you can try running the following command from the command-line window:

tns

The tns command is a short form for Telerik NativeScript, and will show the array of commands associated with it. The NativeScript CLI comes with a host of commands to assist in our development, commands such as create, which helps us create a basic startup project, and deploy, which informs the NativeScript CLI to deploy the application to the device (the device can be a connected device or an emulator).
You can check all the commands available with the NativeScript CLI by using the help command as follows:

tns --help

Installing mobile platform dependencies

To build native applications, we need to install the dependencies for those mobile platforms. It is important to remember that if we want to build a NativeScript application for iOS and run it on an iOS-compatible device, we need to use macOS; for building Android applications, we can use both Windows and macOS.

NativeScript provides an easy, single script for Windows and macOS that takes care of installing all the required tools and frameworks. The script for Windows is as shown in the following code:

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://www.nativescript.org/setup/win'))"

The script for macOS is as shown in the following code:

ruby -e "$(curl -fsSL https://www.nativescript.org/setup/mac)"

It's important to note that these scripts require administrator-level privileges, so you may need to run them using the sudo command. NativeScript also provides a step-by-step guide to installing all these dependencies manually; details can be found at https://docs.nativescript.org/start/ns-setup-win. Once you have installed all the packages, you can check if the installation was successful by running the following command:

tns doctor

This command checks all the required prerequisites for building a NativeScript application, and if there are no issues identified, this command will return a success message, No issues were detected.

Installing an Android Virtual Device

Once you have installed all the dependencies, the next step is to install an Android emulator, which can be used for testing instead of connecting real devices. To be able to create an emulator, you need to have Android Studio on your machine. You can install Android Studio from https://developer.android.com/studio/index.html. Once you have installed Android Studio, you can check whether you have the correct Android SDK version. The NativeScript CLI needs Android SDK version 25 or higher; if you see that you do not have the required Android SDK version, then you can install it either using the following command or using the Android Studio IDE:

"%ANDROID_HOME%\tools\bin\sdkmanager" "tools" "platform-tools" "platforms;android-25" "build-tools;25.0.2" "extras;android;m2repository" "extras;google;m2repository"

To install the Android emulator, we use Android Studio, the details of which can be found at https://docs.nativescript.org/tooling/android-virtual-devices. On macOS, we need to make sure we have Xcode installed, or else we will not be able to run iOS-based applications. Again, you can use the tns doctor command to check if your installation was successful.

And that's it! You have successfully installed and set up the NativeScript environment.

Want to learn how to develop native web apps? We've got it covered. All you have to do is check out the book TypeScript 2.x By Example to create and deploy a web app as a native app in a step-by-step manner.

Tools in TypeScript
Introducing Object Oriented Programming with TypeScript
Writing SOLID JavaScript code with TypeScript

ChatGPT for Coding

Jakov Semenski
25 Apr 2024
6 min read
Introduction

ChatGPT's coding style is terrible: verbose, complex, and outdated. Let's change that.

ChatGPT promised to be our coding savior, but sometimes it feels more like a blast from the past. Remember those early 2000's coding books? Yep, it's giving those vibes. It's like having a sports car with a tractor engine. Great potential, but the performance? Not quite there. Imagine harnessing the power of ChatGPT but with the finesse of a master coder. Ready for the upgrade? Here are 12 pro prompts that will get you the right results.

Tip #1: Specificity is the king

As soon as you ask for some coding snippet from ChatGPT, by default, you will get the most basic HelloWorld example. The more vague your prompt is, the more mediocre your results will be. Instead, specify exactly the language, version, and framework:

Write backend code for Library app that uses Rest to communicate. Cover endpoints for adding, removing, and filtering books by category and date published. Use Java latest version. Use lambda streams instead of for loops. Use Spring framework.

Tip #2: Avoid code vomit

ChatGPT loves to write a lot of code, the way I like to call it "code vomit". We are no longer rewarded by the amount of code we produce, but by the clarity and principles we follow. Give ChatGPT instructions to write clean code, use the latest principles, and cover logging and exception handling:

Write clean code. Code needs to be covered with logging and proper exception handling. Use principles: KISS & DRY, SOLID. Keep in mind to use design patterns where it is applicable. Using coding instructions I gave you, give me code for each class.

Tip #3: Make it easy to use with IDE

Every time ChatGPT writes code you get explanations, import statements, and comments. This can be good for a beginner but is not something we need for our IDE. Our IDE is already good with importing all the right packages, so let ChatGPT know:

When writing code, avoid detailed explanations, just simple bullet points. Don't add import statements, as IDE will do this instead.

Tip #4: Write tests

Your code is not complete if you are not done with tests. But not just any tests. We want to have unit and integration tests in a readable format (given-when-then), covering the happy and unhappy paths, using the latest testing libraries such as AssertJ and BDDMockito:

For each class write a unit and integration tests. Use given when then format. For libraries use BDDMockito and AssertJ. Cover happy and unhappy paths.

Tip #5: Give REST call request examples

What is the app if we cannot test it? Instead of creating examples manually, ask ChatGPT to create curl examples we can easily copy to Postman:

For each request, generate curl examples.

Now go ahead and use your terminal, or copy/paste them to Postman.

Tip #6: Create documentation

We don't want just plain text; instead, we need a quick start guide for developers:

Write a quick start guide for developers using markdown. Imagine this app has been published to github repository. Cover: introduction, how to install app, how to run it, how to use it.

Tip #7: Prepare deployment script for Cloud

This app cannot live just in your local environment. Instead, we need a deployment script. Depending on where you want to deploy your changes, it might be a Kubernetes cluster script, Google-specific Terraform scripts, an AWS CloudFormation script, or an Azure-specific deployment script. Or ask ChatGPT to suggest a deployment script:

Provide me deployment script for one of most popular cloud providers.

Tip #8: Version control

Our code for now is living only locally. Let's ask ChatGPT to give us instructions on how to set up version control:

Provide Github version control setup instructions.

Tip #9: Define CI/CD pipeline

CI/CD, or continuous integration and continuous deployment, is a must-have step for any serious development. There are plenty of options to choose from, such as Jenkins, GitHub Actions, and Bamboo. With CI we guarantee we can safely merge our changes by running the build and tests, and check if our code changes comply with sonar policies. With CD we guarantee that we can safely deploy our changes:

Provide github actions that for each open pull request we run the build and run all the tests. Also automatically include sonarqube scans. Also create github action to run deployment on every code merge.

Tip #10: Performance optimization

Our backend REST service is now running, but the questions we need to ask ourselves are: how fast is it, how many requests can it handle, and what is the maximum limit of requests? For that, we need to execute performance tests, e.g. using JMeter or Gatling:

We need to test what is the limit of our app. Write a load test script for gatling that tests how many book searches we can execute.

Tip #11: Run a security audit

How can we ensure our app is secure and not open to any threats? The best way is to run security scans:

Our application might be open for security threats. Which security scan tools we can use for free and how can we use them. Give me step-by-step instruction on how to use it.

Tip #12: Optimize for observability

You have your app running somewhere in the cloud. But did you optimize it for observability? How can you easily troubleshoot issues? How can you trace requests between different services? Did you set up monitoring?

We want to make sure our application is optimized for observability. Create guideline and configuration for the cloud environment for: Traceability - tracing request from start to finish; Monitoring - monitoring key performance metrics; Logging - have a centralized logging system.

Conclusion

You can find the full prompt here: https://chat.openai.com/share/f0bef1ca-062d-4a22-96aa-9711615329a5

ChatGPT is a tool, and like any tool, it shines when used the right way. With these prompts, you get a coding assistant that keeps up with the latest trends, ensuring your code is not just functional but also follows modern standards.

Author Bio

Jakov Semenski is an IT Architect working at IBMiX with almost 20 years of experience. He is also a ChatGPT speaker at the WeAreDevelopers conference and shares valuable tech stories on LinkedIn.

How Netflix migrated from a monolithic to a microservice architecture [Video]

Savia Lobo
17 Oct 2018
3 min read
A microservice architecture enables organizations to carry out continuous delivery/deployment of large, complex applications to further evolve their technology stack. Netflix, a popular name for video-streaming, started off by selling and renting DVDs and gained popularity post its migration to a microservice architecture on AWS. The adoption of public cloud and a microservice architecture were the main drivers of this growth. The elasticity of the cloud allowed them to scale easily without any additional work required. Why did Netflix decide to use microservices? Way back in August 2008, Netflix had a major outage because of a major database corruption which prevented them from shipping DVDs to customers for three days. Following this, they decided to move away from a single point of failure--that could only scale vertically--and move to components that could scale horizontally and are highly available. Netflix decided to abandon their private data centers and migrate to a public cloud--AWS to be specific, which provides horizontal scalability. In order to eliminate all the existing single points of failure, they decided to re-architect their systems instead of just moving them as is to the cloud. The Simian Army A basic technique Netflix uses to make their systems more reliable and highly available is, ‘The Simian army’. These are a set of tools used to increase the resiliency of your services. The most widely used is Chaos monkey, which allows one to introduce random failures in a system to see how it reacts. Read more: Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation At Netflix, they randomly kill servers from their production fleets every once in a while and make sure there is no difference in customer experience, as the system was able to handle these failures gracefully. Chaos monkey can also be used to introduce network latency. Watch the video above by Dimos Raptis to dive deeper into Netflix’s actual transition including details about the specific techniques and methodologies they used. The video also features the lessons they learned from this transition. About Dimos Raptis Dimos Raptis is a professional Software Engineer at Alexa, Amazon with several years of experience, designing and developing software systems for various companies, ranging from small software shops to big tech companies. His technical expertise lies in the Java and Linux ecosystems; he has some hands-on experience with emergent open-source technologies. You can follow Dimos on Twitter @dimosr7. To learn more about where to use microservices and to understand the aspects you should take into account when building your architecture, head over to our course titled, ‘Microservices Architecture’. Building microservices from a monolith Java EE app [Tutorial] How Netflix uses AVA, an Image Discovery tool to find the perfect title image for each of its shows NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!  

HTML5 and the rise of modern JavaScript browser APIs [Tutorial]

Pavan Ramchandani
20 Jul 2018
15 min read
HTML5 arrived in 2008. It was, however, so technologically advanced at the time that it was predicted that it would not be ready till at least 2022! However, that turned out to be incorrect, and here we are, with fully supported HTML5 and ES6/ES7/ES8-supported browsers. A lot of APIs used by HTML5 go hand in hand with JavaScript. Before looking at those APIs, let us understand a little about how JavaScript sees the web. This'll eventually put us in a strong position to understand various interesting, JavaScript-related things such as the Web Workers API, etc.

In this article, we will introduce you to the most popular web languages, HTML and JavaScript, and how they came together to become the default platform for building modern front-end web applications. This is an excerpt from the book Learn ECMAScript - Second Edition, written by Mehul Mohan and Narayan Prusty.

The HTML DOM

The HTML DOM is a tree version of how the document looks. Here is a very simple example of an HTML document:

<!doctype HTML> <html> <head> <title>Cool Stuff!</title> </head> <body> <p>Awesome!</p> </body> </html>

Here's how its tree version will look: The previous diagram is just a rough representation of the DOM tree. HTML tags consist of head and body; furthermore, the <body> tag consists of a <p> tag, whereas the <head> tag consists of the <title> tag. Simple! JavaScript has access to the DOM directly, and can modify the connections between these nodes, add nodes, remove nodes, change contents, attach event listeners, and so on.

What is the Document Object Model (DOM)?

Simply put, the DOM is a way to represent HTML or XML documents as nodes. This makes it easier for other programming languages to connect to a DOM-following page and modify it accordingly. To be clear, the DOM is not a programming language. The DOM provides JavaScript with a way to interact with web pages. You can think of it as a standard. Every element is part of the DOM tree, which can be accessed and modified with APIs exposed to JavaScript. The DOM is not restricted to being accessed only by JavaScript. It is language-independent and there are several modules available in various languages to parse the DOM (just like JavaScript), including PHP, Python, Java, and so on.

As said previously, the DOM provides JavaScript with a way to interact with it. How? Well, accessing the DOM is as easy as accessing predefined objects in JavaScript: document. The DOM API specifies what you'll find inside the document object. The document object essentially gives JavaScript access to the DOM tree formed by your HTML document. If you notice, you cannot access any element at all without actually accessing the document object first.

DOM methods/properties

All HTML elements are objects in JavaScript. The most commonly used object is the document object. It has the whole DOM tree attached to it. You can query for elements on that. Let's look at some very common examples of these methods:

getElementById method
getElementsByTagName method
getElementsByClassName method
querySelector method
querySelectorAll method

By no means is this an exhaustive list of all the methods available. However, this list should at least get you started with DOM manipulation. Use MDN as your reference for various other methods. Here's the link: https://developer.mozilla.org/en-US/docs/Web/API/Document#Methods.

Modern JavaScript browser APIs

HTML5 brought a lot of support for some awesome APIs in JavaScript, right from the start.
Although some APIs were released with HTML5 itself (such as the Canvas API), some were added later (such as the Fetch API). Let's see some of these APIs and how to use them with some code examples.

Page Visibility API - is the user still on the page?

The Page Visibility API allows developers to run specific code whenever the page the user is on goes in or out of focus. Imagine you run a game-hosting site and want to pause the game whenever the user loses focus on your tab. This is the way to go!

function pageChanged() { if (document.hidden) { console.log('User is on some other tab/out of focus') // line #1 } else { console.log('Hurray! User returned') // line #2 } } document.addEventListener("visibilitychange", pageChanged);

We're adding an event listener to the document; it fires whenever the page's visibility changes. Sure, the pageChanged function gets an event object as well in the argument, but we can simply use the document.hidden property, which returns a Boolean value depending on the page's visibility at the time the code was called. You'll add your pause game code at line #1 and your resume game code at line #2.

navigator.onLine API – the user's network status

The navigator.onLine API tells you if the user is online or not. Imagine building a multiplayer game and you want the game to automatically pause if the user loses their internet connection. This is the way to go here!

function state(e) { if(navigator.onLine) { console.log('Cool we\'re up'); } else { console.log('Uh! we\'re down!'); } } window.addEventListener('offline', state); window.addEventListener('online', state);

Here, we're attaching two event listeners to the window global. We want to call the state function whenever the user goes offline or online, so the browser will call it every time the connection status changes. We can check whether the user is offline or online with navigator.onLine, which returns a Boolean value of true if there's an internet connection, and false if there's not.

Clipboard API - programmatically manipulating the clipboard

The Clipboard API finally allows developers to copy to a user's clipboard without those nasty Adobe Flash plugin hacks that were not cross-browser/cross-device-friendly. Here's how you'll copy a selection to a user's clipboard:

<script> function copy2Clipboard(text) { const textarea = document.createElement('textarea'); textarea.value = text; document.body.appendChild(textarea); textarea.focus(); textarea.setSelectionRange(0, text.length); document.execCommand('copy'); document.body.removeChild(textarea); } </script> <button onclick="copy2Clipboard('Something good!')">Click me!</button>

First of all, we need the user to actually click the button. Once the user clicks the button, we call a function that creates a textarea in the background using the document.createElement method. The script then sets the value of the textarea to the passed text. We then focus on that textarea and select all the contents inside it. Once the contents are selected, we execute a copy with document.execCommand('copy'); this copies the current selection in the document to the clipboard. Since, right now, the value inside the textarea is selected, it gets copied to the clipboard. Finally, we remove the textarea from the document so that it doesn't disrupt the document layout. You cannot trigger copy2Clipboard without user interaction.
I mean, obviously you can, but document.execCommand('copy') will not work if the event does not come from the user (click, double-click, and so on). This is a security implementation so that a user's clipboard is not messed around with by every website that they visit. The Canvas API - the web's drawing board HTML5 finally brought in support for <canvas>, a standard way to draw graphics on the web! Canvas can be used pretty much for everything related to graphics you can think of; from digitally signing with a pen, to creating 3D games on the web (3D games require WebGL knowledge, interested? - visit http://bit.ly/webgl-101). Let's look at the basics of the Canvas API with a simple example: <canvas id="canvas" width="100" height="100"></canvas> <script> const canvas = document.getElementById("canvas"); const ctx = canvas.getContext("2d"); ctx.moveTo(0,0); ctx.lineTo(100, 100); ctx.stroke(); </script> This renders the following: How does it do this? Firstly, document.getElementById('canvas') gives us the reference to the canvas on the document. Then we get the context of the canvas. This is a way to say what I want to do with the canvas. You could put a 3D value there, of course! That is indeed the case when you're doing 3D rendering with WebGL and canvas. Once we have a reference to our context, we can do a bunch of things and add methods provided by the API out-of-the-box. Here we moved the cursor to the (0, 0) coordinates. Then we drew a line till (100,100) (which is basically a diagonal on the square canvas). Then we called stroke to actually draw that on our canvas. Easy! Canvas is a wide topic and deserves a book of its own! If you're interested in developing awesome games and apps with Canvas, I recommend you start off with MDN docs: http://bit.ly/canvas-html5. The Fetch API - promise-based HTTP requests One of the coolest async APIs introduced in browsers is the Fetch API, which is the modern replacement for the XMLHttpRequest API. Have you ever found yourself using jQuery just for simplifying AJAX requests with $.ajax? If you have, then this is surely a golden API for you, as it is natively easier to code and read! However, fetch comes natively, hence, there are performance benefits. Let's see how it works: fetch(link) .then(data => { // do something with data }) .catch(err => { // do something with error }); Awesome! So fetch uses promises! If that's the case, we can combine it with async/await to make it look completely synchronous and easy to read! <img id="img1" alt="Mozilla logo" /> <img id="img2" alt="Google logo" /> const get2Images = async () => { const image1 = await fetch('https://cdn.mdn.mozilla.net/static/img/web-docs-sprite.22a6a085cf14.svg'); const image2 = await fetch('https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png'); console.log(image1); // gives us response as an object const blob1 = await image1.blob(); const blob2 = await image2.blob(); const url1 = URL.createObjectURL(blob1); const url2 = URL.createObjectURL(blob2); document.getElementById('img1').src = url1; document.getElementById('img2').src = url2; return 'complete'; } get2Images().then(status => console.log(status)); The line console.log(image1) will print the following: You can see the image1 response provides tons of information about the request. 
It has an interesting field body, which is actually a ReadableStream, and a byte stream of data that can be cast to a  Binary Large Object (BLOB) in our case. A blob object represents a file-like object of immutable and raw data. After getting the Response, we convert it into a blob object so that we can actually use it as an image. Here, fetch is actually fetching us the image directly so we can serve it to the user as a blob (without hot-linking it to the main website). Thus, this could be done on the server side, and blob data could be passed down a WebSocket or something similar. Fetch API customization The Fetch API is highly customizable. You can even include your own headers in the request. Suppose you've got a site where only authenticated users with a valid token can access an image. Here's how you'll add a custom header to your request: const headers = new Headers(); headers.append("Allow-Secret-Access", "yeah-because-my-token-is-1337"); const config = { method: 'POST', headers }; const req = new Request('http://myawesomewebsite.awesometld/secretimage.jpg', config); fetch(req) .then(img => img.blob()) .then(blob => myImageTag.src = URL.createObjectURL(blob)); Here, we added a custom header to our Request and then created something called a Request object (an object that has information about our Request). The first parameter, that is, http://myawesomewebsite.awesometld/secretimage.jpg, is the URL and the second is the configuration. Here are some other configuration options: Credentials: Used to pass cookies in a Cross-Origin Resource Sharing (CORS)-enabled server on cross-domain requests. Method: Specifies request methods (GET, POST, HEAD, and so on). Headers: Headers associated with the request. Integrity: A security feature that consists of a (possibly) SHA-256 representation of the file you're requesting, in order to verify whether the request has been tampered with (data is modified) or not. Probably not a lot to worry about unless you're building something on a very large scale and not on HTTPS. Redirect: Redirect can have three values: Follow: Will follow the URL redirects Error: Will throw an error if the URL redirects Manual: Doesn't follow redirect but returns a filtered response that wraps the redirect response Referrer: the URL that appears as a referrer header in the HTTP request. Accessing and modifying history with the history API You can access a user's history to some level and modify it according to your needs using the history API. It consists of the length and state properties: console.log(history, history.length, history.state); The output is as follows: {length: 4, scrollRestoration: "auto", state: null} 4 null In your case, the length could obviously be different depending on how many pages you've visited from that particular tab. history.state can contain anything you like (we'll come to its use case soon). Before looking at some handy history methods, let us take a look at the window.onpopstate event. Handling window.onpopstate events The window.onpopstate event is fired automatically by the browser when a user navigates between history states that a developer has set. This event is important to handle when you push to history object and then later retrieve information whenever the user presses the back/forward button of the browser. Here's how we'll program a simple popstate event: window.addEventListener('popstate', e => { console.log(e.state); // state data of history (remember history.state ?) 
}) Now we'll discuss some methods associated with the history object. Modifying history - the history.go(distance) method history.go(x) is equivalent to the user clicking his forward button x times in the browser. However, you can specify the distance to move, that is history.go(5); . This equivalent to the user hitting the forward button in the browser five times. Similarly, you can specify negative values as well to make it move backward. Specifying 0 or no value will simply refresh the page: history.go(5); // forwards the browser 5 times history.go(-1); // similar effect of clicking back button history.go(0); // refreshes page history.go(); // refreshes page Jumping ahead - the history.forward() method This method is simply the equivalent of history.go(1). This is handy when you want to just push the user to the page he/she is coming from. One use case of this is when you can create a full-screen immersive web application and on your screen there are some minimal controls that play with the history behind the scenes: if(awesomeButtonClicked && userWantsToMoveForward()) { history.forward() } Going back - the history.back() method This method is simply the equivalent of history.go(-1). A negative number, makes the history go backwards. Again, this is just a simple (and numberless) way to go back to a page the user came from. Its application could be similar to a forward button, that is, creating a full-screen web app and providing the user with an interface to navigate by. Pushing on the history - history.pushState() This is really fun. You can change the browser URL without hitting the server with an HTTP request. If you run the following JS in your browser, your browser will change the path from whatever it is (domain.com/abc/egh) to  /i_am_awesome (domain.com/i_am_awesome) without actually navigating to any page: history.pushState({myName: "Mehul"}, "This is title of page", "/i_am_awesome"); history.pushState({page2: "Packt"}, "This is page2", "/page2_packt"); // <-- state is currently here The History API doesn't care whether the page actually exists on the server or not. It'll just replace the URL as it is instructed. The  popstate event when triggered with the browser's back/forward button, will fire the function below and we can program it like this: window.onpopstate = e => { // when this is called, state is already updated. // e.state is the new state. It is null if it is the root state. if(e.state !== null) { console.log(e.state); } else { console.log("Root state"); } } To run this code, run the onpopstate event first, then the two lines of history.pushState previously. Then press your browser's back button. You should see something like: {myName: "Mehul"} which is the information related to the parent state. Press back button one more time and you'll see the message Root State. pushState does not fire onpopstate event. Only browsers' back/forward buttons do. Pushing on the history stack - history.replaceState() The history.replaceState() method is exactly like history.pushState(), the only difference is that it replaces the current page with another, that is, if you use history.pushState() and press the back button, you'll be directed to the page you came from. However, when you use history.replaceState() and you press the back button, you are not directed to the page you came from because it is replaced with the new one on the stack. 
Here's an example of working with the replaceState method: history.replaceState({myName: "Mehul"}, "This is title of page", "/i_am_awesome"); This replaces (instead of pushing) the current state with the new state. Although using the History API directly in your code may not be beneficial to you right now, many frameworks and libraries such as React, under the hood, use the History API to create a seamless, reload-less, smooth experience for the end user. If you found this article useful, do check out the book Learn ECMAScript, Second Edition to learn the ECMAScript standards for designing quality web applications. What's new in ECMAScript 2018 (ES9)? 8 recipes to master Promises in ECMAScript 2018 Build a foodie bot with JavaScript

AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

Kunal Chaudhari
06 Jun 2018
19 min read
An AI character system needs to be aware of its environment such as where the obstacles are, where the enemy is, whether the enemy is visible in the player's sight, and so on. The quality of our  Non-Player Character (NPC's) AI completely depends on the information it can get from the environment. Nothing breaks the level of immersion in a game like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data. If the sensory systems do not provide enough data, or the AI system is unable to properly take action on that data, the agent can begin to glitch, or behave in a way contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it's worth a quick internet search to find some videos of AI glitches for a good laugh. In this article, we'll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist. You are reading an extract from Unity 2017 Game AI programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, Thet Naing Swe. Basic sensory systems Our agent's sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position when you turned the lights off because your perspective shifts and you have to rely more and more on your fuzzy memory of the room's layout. While our senses rely on and take in a constant stream of data to navigate their environment, our agent's AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent. The concept of a basic sensory system is that there will be two components, Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch. These senses will look out for specific aspects such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that's looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain. For our demo, this is basically what we are going to implement—a base interface called Sense that will be implemented by other custom senses. In this article, we'll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch, when an enemy gets too close, we want to be able to sense that, almost as if our AI character can hear that the enemy is nearby. Then we'll write a minimal Aspect class that our senses will be looking for. Cone of sight A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle visual detection in a simple way, it doesn't accurately model the way vision works for most entities. 
An alternative to using the line of sight is using a cone-shaped field of vision. As the following figure illustrates, the field of vision is literally modeled using a cone shape. This can be in 2D or 3D, as appropriate for your type of game: The preceding figure illustrates the concept of a cone of sight. In this case, beginning with the source, that is, the agent's eyes, the cone grows, but becomes less accurate with distance, as represented by the fading color of the cone. The actual implementation of the cone can vary from a basic overlap test to a more complex realistic model, mimicking eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance or periphery. A complex implementation mimics eyesight more closely; as the cone widens away from the source, the field of vision grows, but the chance of getting to see things toward the edges of the cone diminishes compared to those near the center of the source. Hearing, feeling, and smelling using spheres One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source and the loudness dissipating the farther from the center the listener is. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound. The listener's hearing is represented by a sphere, and the sounds closest to the listener are more likely to be "heard." We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling. The following figure represents our sphere and how our agent fits into the setup: As with sight, the probability of an agent registering the sensory event can be modified, based on the distance from the sensor or as a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere. Expanding AI through omniscience In a nutshell, omniscience is really just a way to make your AI cheat. While your agent doesn't necessarily know everything, it simply means that they can know anything. In some ways, this can seem like the antithesis to realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool to provide an extra layer of complexity. In games, we tend to model abstract concepts using concrete values. For example, we may represent a player's health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force or sense events in your game world without having to physically experience them. While omniscience is not necessarily a specific pattern or technique, it's another tool in your toolbox as a game developer to cheat a bit and make your game more interesting by, in essence, bending the rules of AI, and giving your agent data that they may not otherwise have had access to through physical means. Getting creative with sensing While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive their environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns. 
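To illustrate the sphere-based senses described a little earlier, here is a minimal hearing sketch that gathers every collider overlapping the listener's sphere and weights each one by distance. The class name, the hearingRadius field, and the layer mask are hypothetical, and this is not the implementation used later in this article:

using UnityEngine;

public class HearingSenseExample : MonoBehaviour
{
    public float hearingRadius = 15.0f; // radius of the listener's sphere
    public LayerMask soundSources;      // layers containing objects that emit sound

    void Update()
    {
        // Gather every collider that currently overlaps the hearing sphere.
        Collider[] heard = Physics.OverlapSphere(transform.position, hearingRadius, soundSources);
        foreach (Collider source in heard)
        {
            // Weight the result by distance so that closer sources count as "louder".
            float distance = Vector3.Distance(transform.position, source.transform.position);
            float loudness = 1.0f - Mathf.Clamp01(distance / hearingRadius);
            Debug.Log("Heard " + source.name + " with loudness " + loudness.ToString("F2"));
        }
    }
}

The same shape can be reused for smell or touch simply by changing the radius and the layers the sphere reacts to.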
Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away! Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of them until they cast or receive the effect of a magic spell. Maybe certain NPCs can track the player using smell, and walking through a collider marked water can clear the scent from the player so that the NPC can no longer track him. As you progress through the book, you'll be given all the tools to pull these and many other mechanics off—sensing, decision-making, pathfinding, and so on. As we cover some of these techniques, start thinking about creative twists for your game. Setting up the scene In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself, by following these steps: Let's create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles. Add a plane to be used as a floor. Then, we add a directional light so that we can see what is going on in our scene. As you can see in the example, there is a target 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene. For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view. Our scene hierarchy will look similar to the following screenshot after you've set everything up correctly: Now we will position the tank, the AI character, and walls randomly in our scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we can have a clear view of the following scene: With the essential setup out of the way, we can begin tackling the code for driving the various systems. Setting up the player tank and aspect Our Target object is a simple sphere game object with the mesh render removed so that we end up with only the Sphere Collider. Look at the following code in the Target.cs file: using UnityEngine; public class Target : MonoBehaviour { public Transform targetMarker; void Start (){} void Update () { int button = 0; //Get the point of the hit position when the mouse is being clicked if(Input.GetMouseButtonDown(button)) { Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition); RaycastHit hitInfo; if (Physics.Raycast(ray.origin, ray.direction, out hitInfo)) { Vector3 targetPosition = hitInfo.point; targetMarker.position = targetPosition; } } } } You'll notice we left in an empty Start method in the code. While there is a cost in having empty Start, Update, and other MonoBehaviour events that don't do anything, we can sometimes choose to leave the Start method in during development, so that the component shows an enable/disable toggle in the inspector. Attach this script to our Target object, which is what we assigned in the inspector to the targetMarker variable. 
The script detects the mouse click event and then, using a raycast, it detects the mouse click point on the plane in the 3D space. After that, it updates the Target object to that position in the world space in the scene. A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin towards a given direction, and returns data on any colliders hit along the way. Implementing the player tank Our player tank is the simple tank model with a kinematic rigid body component attached. The rigid body component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is to assign the tag Player to our tank. The isKinematic flag in Unity's Rigidbody component makes it so that external forces are ignored, so that you can control the Rigidbody entirely from code or from an animation, while still having access to the Rigidbody API. The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates its destination point and the direction accordingly. The code in the PlayerTank.cs file is as follows: using UnityEngine; public class PlayerTank : MonoBehaviour { public Transform targetTransform; public float targetDistanceTolerance = 3.0f; private float movementSpeed; private float rotationSpeed; // Use this for initialization void Start () { movementSpeed = 10.0f; rotationSpeed = 2.0f; } // Update is called once per frame void Update () { if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance) { return; } Vector3 targetPosition = targetTransform.position; targetPosition.y = transform.position.y; Vector3 direction = targetPosition - transform.position; Quaternion tarRot = Quaternion.LookRotation(direction); transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime); transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime)); } } The preceding screenshot shows us a snapshot of our script in the inspector once applied to our tank. This script queries the position of the Target object on the map and updates its destination point and the direction accordingly. After we assign this script to our tank, be sure to assign our Target object to the targetTransform variable. Implementing the Aspect class Next, let's take a look at the Aspect.cs class. Aspect is a very simple class with just one public enum of type AspectTypes called aspectType. That's all of the variables we need in this component. Whenever our AI character senses something, we'll check the  aspectType to see whether it's the aspect that the AI has been looking for. The code in the Aspect.cs file looks like this: using UnityEngine; public class Aspect : MonoBehaviour { public enum AspectTypes { PLAYER, ENEMY, } public AspectTypes aspectType; } Attach this aspect script to our player tank and set the aspectType to PLAYER, as shown in the following screenshot: Creating an AI character Our NPC will be roaming around the scene in a random direction. It'll have the following two senses: The perspective sense will check whether the tank aspect is within a set visible range and distance The touch sense will detect if the enemy aspect has collided with its box collider, which we'll be adding to the tank in a later step Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own. 
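As a side note, the tank setup described above (the Player tag and the kinematic Rigidbody) can also be applied from code rather than through the inspector. The following sketch is purely illustrative and is not part of the book's project:

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class PlayerTankSetupExample : MonoBehaviour
{
    void Awake()
    {
        // Tag the tank so that senses can look it up with FindGameObjectWithTag("Player").
        gameObject.tag = "Player";

        // A kinematic Rigidbody still raises trigger events, but it ignores external
        // forces, so the tank can be driven entirely from script or animation.
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.isKinematic = true;
    }
}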
The code in the Wander.cs file is as follows: using UnityEngine; public class Wander : MonoBehaviour { private Vector3 targetPosition; private float movementSpeed = 5.0f; private float rotationSpeed = 2.0f; private float targetPositionTolerance = 3.0f; private float minX; private float maxX; private float minZ; private float maxZ; void Start() { minX = -45.0f; maxX = 45.0f; minZ = -45.0f; maxZ = 45.0f; //Get Wander Position GetNextPosition(); } void Update() { if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance) { GetNextPosition(); } Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position); transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime); transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime)); } void GetNextPosition() { targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ)); } } The Wander script generates a new random position in a specified range whenever the AI character reaches its current destination point. The Update method will then rotate our enemy and move it toward this new destination. Attach this script to our AI character so that it can move around in the scene. The Wander script is rather simplistic. Using the Sense class The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be implemented in custom senses, and are executed from the Start and Update methods, respectively. Virtual methods are methods that can be overridden using the override modifier in derived classes. Unlike abstract classes, virtual classes do not require that you override them. The code in the Sense.cs file looks like this: using UnityEngine; public class Sense : MonoBehaviour { public bool enableDebug = true; public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY; public float detectionRate = 1.0f; protected float elapsedTime = 0.0f; protected virtual void Initialize() { } protected virtual void UpdateSense() { } // Use this for initialization void Start () { elapsedTime = 0.0f; Initialize(); } // Update is called once per frame void Update () { UpdateSense(); } } The basic properties include its detection rate to execute the sensing operation, as well as the name of the aspect it should look for. This script will not be attached to any of our objects since we'll be deriving from it for our actual senses. Giving a little perspective The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console. 
The code in the Perspective.cs file looks like this: using UnityEngine; public class Perspective : Sense { public int fieldOfView = 45; public int viewDistance = 100; private Transform playerTransform; private Vector3 rayDirection; protected override void Initialize() { playerTransform = GameObject.FindGameObjectWithTag("Player").transform; } protected override void UpdateSense() { elapsedTime += Time.deltaTime; if (elapsedTime >= detectionRate) { DetectAspect(); } } //Detect perspective field of view for the AI Character void DetectAspect() { RaycastHit hit; rayDirection = playerTransform.position - transform.position; if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView) { // Detect if player is within the field of view if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance)) { Aspect aspect = hit.collider.GetComponent<Aspect>(); if (aspect != null) { //Check the aspect if (aspect.aspectType != aspectName) { print("Enemy Detected"); } } } } } We need to implement the Initialize and UpdateSense methods that will be called from the Start and Update methods of the parent Sense class, respectively. In the DetectAspect method, we first check the angle between the player and the AI's current direction. If it's in the field of view range, we shoot a ray in the direction that the player tank is located. The ray length is the value of the visible distance property. The Raycast method will return when it first hits another object. This way, even if the player is in the visible range, the AI character will not be able to see if it's hidden behind the wall. We then check for an Aspect component, and it will return true only if the object that was hit has an Aspect component and its aspectType is different from its own. The OnDrawGizmos method draws lines based on the perspective field of view angle and viewing distance so that we can see the AI character's line of sight in the editor window during play testing. Attach this script to our AI character and be sure that the aspect type is set to ENEMY. This method can be illustrated as follows: void OnDrawGizmos() { if (playerTransform == null) { return; } Debug.DrawLine(transform.position, playerTransform.position, Color.red); Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance); //Approximate perspective visualization Vector3 leftRayPoint = frontRayPoint; leftRayPoint.x += fieldOfView * 0.5f; Vector3 rightRayPoint = frontRayPoint; rightRayPoint.x -= fieldOfView * 0.5f; Debug.DrawLine(transform.position, frontRayPoint, Color.green); Debug.DrawLine(transform.position, leftRayPoint, Color.green); Debug.DrawLine(transform.position, rightRayPoint, Color.green); } } Touching is believing The next sense we'll be implementing is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a box collider component and its IsTrigger flag is on. We need to implement the OnTriggerEnter event, which will be called whenever another collider enters the collision area of this game object's collider. Since our tank entity also has a collider and rigid body components, collision events will be raised as soon as the colliders of the AI character and player tank collide. Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to fire off every frame that a collider is inside the trigger, respectively. 
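As a quick illustration of those other two events, a variation of the touch sense could also react while the enemy stays in range and when it leaves. The following sketch is a hypothetical extension, not the book's implementation, which appears next:

using UnityEngine;

// Hypothetical extension of the touch sense using OnTriggerStay and OnTriggerExit.
public class TouchExitExample : Sense
{
    void OnTriggerStay(Collider other)
    {
        // Fires every physics update while the other collider remains inside the trigger.
    }

    void OnTriggerExit(Collider other)
    {
        Aspect aspect = other.GetComponent<Aspect>();
        if (aspect != null && aspect.aspectType != aspectName)
        {
            print("Enemy moved out of touch range");
        }
    }
}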
The code in the Touch.cs file is as follows: using UnityEngine; public class Touch : Sense { void OnTriggerEnter(Collider other) { Aspect aspect = other.GetComponent<Aspect>(); if (aspect != null) { //Check the aspect if (aspect.aspectType != aspectName) { print("Enemy Touch Detected"); } } } } Our sample NPC and tank have  BoxCollider components on them already. The NPC has its sensor collider set to IsTrigger = true . If you're setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. Our trigger can be seen in the following screenshot: The previous screenshot shows the box collider on our enemy AI that we'll use to trigger the touch sense event. In the following screenshot, we can see how our AI character is set up: For demo purposes, we just print out that the enemy aspect has been detected by the touch sense, but in your own games, you can implement any events and logic that you want. Testing the results Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to move to the clicked location. You should see the Enemy touch detected message in the console log window whenever our AI character gets close to our player tank: The previous screenshot shows an AI agent with touch and perspective senses looking for another aspect. Move the player tank in front of the NPC, and you'll get the Enemy detected message. If you go to the editor view while running the game, you should see the debug lines being rendered. This is because of the OnDrawGizmos method implemented in the perspective Sense class. To summarize, we introduced the concept of using sensors and implemented two distinct senses—perspective and touch—for our AI character. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to explore the brand-new features in Unity 2017. How to use arrays, lists, and dictionaries in Unity for 3D game development How to create non-player Characters (NPC) with Unity 2018

How to use Bootstrap grid system for responsive website design?

Savia Lobo
18 May 2018
12 min read
Bootstrap Origins In 2011, Bootstrap was created by two Twitter employees (Mark Otto and Jacob Thornton) to address the issue of fragmentation of internal tools/platforms. Bootstrap aimed to provide consistency among different web applications that were internally developed at Twitter to reduce redundancy and increase adaptability and reusability. As digital creators, we should always aim to make our applications adaptable and reusable. This will help keep coherency between applications and speed up processes, as we won't need to create basic foundations over and over again. In today's tutorial, you will learn what a Bootstrap is, how it relates to Responsive Web Design and its importance to the web industry. When Twitter Blueprint was born, it provided a way to document and share common design patterns/assets within Twitter. This alone is an amazing feature that would make Bootstrap an extremely useful framework. With this more internal developers began contributing towards the Bootstrap project as part of Hackathon week, and the project just exploded. Not long after, it was renamed "Bootstrap" as we know and love it today, and was released as an open source project to the community. A core team led by Mark and Jacob along with a passionate and growing community of developers helped to accelerate the growth of Bootstrap. In early 2012 after a lot of contributions from the core team and the community, Bootstrap 2 was born. It had come a long way from being a framework for providing internal consistency among Twitter tools. It was now a responsive framework using a 12-column grid system. It also provided inbuilt support for Glyphicons and a plethora of other new components. In 2013, Bootstrap 3 was released with a mobile-first approach to design and a fully redesigned set of components using the immensely popular flat design. This is the version many websites use today and it is very suitable for most developers. Bootstrap 4 is the latest stable  release. This article is an excerpt taken from the book,' Responsive Web Design by Example', written by Frahaan Hussain. Why use Bootstrap? You probably have a reasonable idea of why you would use Bootstrap for developing websites after reading its history, but there is more to it. Simply put, it provides the following: A responsive grid, using the design philosophies. Cross browser compatibility, using Normalize.css to ensure elements render consistently across all browsers (which isn't a very easy task). You might be wondering why it's difficult. Simply put, there are several different browsers, each with a plethora of versions, which all render content differently. I've seen some browsers put a border around an image by default, whereas some browsers don't. This type of inconsistency will prove to be very bad for user experience. A plethora of UI components, by providing polished UI components as developers, we are going to bring our creativity to life in a much easier way. These components usually allow a team to increase their development velocity since they start from a solid tried and tested foundation. They not only provide good design, but they are usually implemented using best practices in terms of performance and accessibility. A very compact size with only a small footprint. Really fast to develop with, it doesn't get in the way like many other frameworks, but allows your creativity to shine through. Extremely easy to start using Bootstrap in your website. Bundles common JavaScript plugins such as jQuery. Excellent documentation. 
Customizable, allowing you to remove any unnecessary features. An amazing community that is always ready, 24/7, to help. It's pretty clear now that Bootstrap is an amazing framework and that it will help provide consistency among our projects and aid cross-browser responsive design. But why use Bootstrap over other frameworks? There are endless responsive frameworks like Bootstrap out there, such as Foundation, W3.CSS, and Skeleton, to mention a few. Bootstrap, however, was one of the first responsive frameworks and is by far the most developed, with an ever-growing community. It has documentation online, both official and unofficial, and other frameworks aren't able to boast about their resources as much as Bootstrap can. Constantly being updated, it is the right choice for any website developer. Also, most JavaScript frameworks, such as Angular and React, have bindings to Bootstrap components that will reduce the amount of code and time spent binding it with another framework. It can also be used with tools such as SASS to customize the components provided further.

Bootstrap's grid system

First, let's cover what a grid system is in general, regardless of the framework you choose to develop your website on top of. Without using a framework, CSS would be used to implement the grid. However, a framework like Bootstrap handles all of the CSS side and provides us with easy-to-use classes. A responsive grid system is composed of two main elements:

Columns: These are the horizontal containers for storing content on a single row
Rows: These are top-level containers for storing columns

Your website will have at least one row, but it can have more. Each row can contain containers that span a set number of columns. For example, if the grid system had 100 columns, then a container that spans 50 would be half the width of the browser and/or parent element.

Basics of Bootstrap

Bootstrap's grid system consists of 12 columns that can be used to display content. Bootstrap also uses containers (methods for storing the website's content), rows, and columns to aid in the layout and alignment of the web page's content. All of these employ HTML classes for usage and will be explained very shortly. The purpose of these is as follows:

Containers are used to group snippets of the website's content, and they, in turn, allow manipulation without disrupting the internal content's flow. There are two different types of containers:
.container: Used for a fixed width, which is set by Bootstrap
.container-fluid: Used for full width, to span the entire browser

Rows are used to horizontally group columns, which aids in lining up the site's content properly:
.row: There is only one type of row

Columns, mentioned previously, are a way of setting how wide content should be.
The following are the classes used for columns:

.col-xs: Designed to display the content only on extra-small screens
  Max container width—none
  Triggered when the browser width is below 576px
.col-sm: Designed to display the content only on small screens
  Max container width—540px
  Triggered when the browser width is above or equal to 576px and below 768px
.col-md: Designed to display the content only on medium screens
  Max container width—720px
  Triggered when the browser width is above or equal to 768px and below 992px
.col-lg: Designed to display the content only on large screens
  Max container width—960px
  Triggered when the browser width is above or equal to 992px and below 1200px
.col-xl: Designed to display the content only on extra-large screens
  Max container width—1140px
  Triggered when the browser width is above or equal to 1200px
.col: Designed to be triggered on all screen sizes

To set a column's width, we simply append an integer ranging from 1 to 12 at the end of the class, like so:

.col-6: Spans six columns on all screen sizes
.col-md-6: Spans six columns only on medium and larger screen sizes

Later in this chapter, we will run through some examples of how to use these features and how they work together.

Usage and examples

To use the aforementioned features, the structure is as follows:

div with container class
  div with row class
    div with column class: Content
    div with column class: Content
    div with column class: Content
    div with column class: Content
  div with row class
    div with column class: Content
    div with column class: Content
    div with column class: Content
    div with column class: Content
    div with column class: Content
    div with column class: Content

The following examples may have some CSS styling applied; this does not affect their usage.

Equal width columns example

We will start off with a simple example that consists of one row and three equal columns on all screen sizes. The following code produces the aforementioned result: You may be scratching your head in regards to the column classes, as they have no numbers appended. This is an amazing feature that will come in useful very often. It allows us, as web developers, to add columns easily, without having to update the numbers, if the width of the columns is equal. In this example, there are three columns, which means the three divs equally span their thirds of the row.

Multi-row, equal-width columns example

Now let's extend the previous example to multiple rows. The following code produces the aforementioned result: As you can see, by adding a new row, the columns automatically go to the next row. This is extremely useful for grouping content together.

Multi-row, equal-width columns without multiple rows example

The title of this example may seem confusing, but you need to read it correctly. We will now cover creating multiple rows using only a single row class. This can be achieved with the help of a display utility class called w-100. The following code produces the aforementioned result: The example shows that multiple row divs are not required for multiple rows. But the result isn't exactly identical, as there is no gap between the rows. This is useful for separating content that is still similar. For example, on a social network, it is common to have posts, and each post will contain information such as its date, title, description, and so on. Each post could be its own row, but within the post, the individual pieces of information could be separated using this class.
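Since the original code listings are not reproduced in this extract, here is an illustrative reconstruction of the kind of markup these equal-width examples use, including the w-100 break just described. The exact content of the original listings is assumed; only the class names are standard Bootstrap:

<div class="container">
  <div class="row">
    <div class="col">Column 1</div>
    <div class="col">Column 2</div>
    <div class="col">Column 3</div>
    <!-- The w-100 utility forces the following columns onto a new line
         without opening a second row div -->
    <div class="w-100"></div>
    <div class="col">Column 4</div>
    <div class="col">Column 5</div>
    <div class="col">Column 6</div>
  </div>
</div>

Because no number is appended to the col classes, the columns in each line share the available width equally; appending a number, such as col-4, would pin a column to four of the twelve grid columns instead.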
Differently sized columns Up until now, we have only created rows with equal-width columns. These are useful, but not as useful as being able to set individual sizes. As mentioned in the Bootstrap grid system section, we can easily change the column width by appending a number ranging from 1-12 at the end of the col class. The following code produces the aforementioned result: As you can see, setting the explicit width of a column is very easy, but this applies the width to all screen sizes. You may want it only to be applied on certain screen sizes. The next section will cover this. Differently sized columns with screen size restrictions Let's use the previous example and expand it to change size responsively on differently sized screens. On extra-large screens, the grid will look like the following: On all other screen sizes it will appear with equal-width columns: The following code produces the aforementioned result: Now we are beginning to use breakpoints that provide a way of creating multiple layouts with minimal extra code to make use of the available real estate fully. Mixing and matching We aren't restricted to choosing only one break-point, we are able to set breakpoints for all the available screen sizes. The following figures illustrate all screen sizes, from extra-small to extra-large: Extra-small: Small: Medium: Large: Extra-large: The following code produces the aforementioned results: It isn't necessary for all divs to have the same breakpoints or to have breakpoints at all. Vertical alignment The previous examples provide functionality for use cases, but sometimes the need may arise to align objects vertically. This could technically be done with empty divs, but this wouldn't be a very elegant solution. Instead, there are alignment classes to help with this as can be seen here: As we can see, you can align rows vertically in one of three positions. The following code produces the aforementioned result: We aren't restricted to only aligning rows, we can easily align columns relative to each other, as is demonstrated here: The following code produces the aforementioned result: Horizontal alignment As we vertically aligned content in the previous section, we will now cover how easy it is to align content horizontally. The following figures show the results of horizontal alignment:   The following code produces the aforementioned result: Column offsetting The need may arise to position content with a slight offset. If the content isn't centered or at the start or end, this can become problematic, but using column offsetting, we can overcome this issue. Simply add an offset class, with the screen size to target, and how many columns (1-12) the content should be offset by, as can be seen in the following example:   The following code produces the aforementioned result: Grid wrap up The examples covered so far will suffice for most websites. There are more techniques for manipulating the grid, which can be found on Bootstrap's website. If you tried any of the examples, you may have noticed cascading from smaller screen-size classes to larger screen-size classes. This occurs when there are no explicit classes set for a certain screen size. Bootstrap components There are plethora of amazing components that are provided with Bootstrap, thus saving time creating them from scratch. There are components for dropdowns, buttons, images, and so much more. 
The usage is very similar to that of the grid system, and the same HTML elements we know and love are used with CSS classes to modify and display Bootstrap constructs. I won't go over every component that Bootstrap offers as that would require an encyclopedia in itself, and many of the commonly used ones will be covered in future chapters through example projects. I would however recommend taking a look at some of the components on Bootstrap's website. If you have found this post useful, do check out this book, ' Responsive Web Design by Example' to build engaging responsive websites using frameworks like Bootstrap and upgrade your skills as a web designer. Get ready for Bootstrap v4.1; Web developers to strap up their boots Web Development with React and Bootstrap Bootstrap 4 Objects, Components, Flexbox, and Layout  

Customizing the Player Character

Packt
13 Oct 2016
18 min read
One of the key features of an RPG is being able to customize your player character. In this article by Vahe Karamian, author of the book Building an RPG with Unity 5.x, we will take a look at how we can provide a means to achieve this. Once again, the approach and concept are universal, but the actual implementation might be a little different based on your model structure. Create a new scene and name it Character Customization. Create a Cube prefab and set it to the origin. Change the Scale of the cube to <5, 0.1, 5>; you can also change the name of the GameObject to Base. This will be the platform that our character model stands on while the player customizes his/her character before game play. Drag and drop the fbx file representing your character model into the Scene View. The next few steps will entirely depend on your model hierarchy and structure as designed by the modeller. To illustrate the point, I have placed the same model in the scene twice. The one on the left is the model that has been configured to display only the basics, while the model on the right is the model in its original state, as shown in the figure below: Notice that this particular model I am using has everything attached. This includes the different types of weapons, shoes, helmets, and armour. The instantiated prefab on the left-hand side has turned off all of the extras from the GameObject's hierarchy. Here is how the hierarchy looks in the Hierarchy View: The model has a very extensive hierarchy in its structure; the figure above is a small snippet to demonstrate that you will need to navigate the structure and manually identify and enable or disable the mesh representing a particular part of the model.

Customizable Parts

Based on my model, I can customize a few things on my 3D model. I can customize the shoulder pads, the body type, the weapons and armor it has, the helmet and shoes, and finally the skin texture, to give it different looks. Let's get a listing of all the different customizable items we have for our character:

Shoulder Shields: there are four types
Body Type: there are three body types; skinny, buff, and chubby
Armor: knee pad, leg plate
Shields: there are two types of shields
Boots: there are two types of boots
Helmet: there are four types of helmets
Weapons: there are 13 different types of weapons
Skins: there are 13 different types of skins

User Interface

Now that we know what our options are for customizing our player character, we can start thinking about the User Interface (UI) that will be used to enable the customization of the character. To design our UI, we will need to create a Canvas GameObject; this is done by right-clicking in the Hierarchy View and selecting Create | UI | Canvas. This will place a Canvas GameObject and an EventSystem GameObject in the Hierarchy View. It is assumed that you already know how to create a UI in Unity. I am going to use a Panel to group the customizable items. For the moment I will be using checkboxes for some items and scroll bars for the weapons and skin texture. The following figure illustrates how my UI for customization looks: These UI elements will need to be integrated with event handlers that will perform the necessary actions for enabling or disabling certain parts of the character model.
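To make the idea of these event handlers concrete before we look at the full script, here is a minimal, hypothetical sketch of a single Toggle turning one mesh on and off. The names are illustrative only; the complete CharacterCustomization script used in this article follows below:

using UnityEngine;
using UnityEngine.UI;

public class ToggleMeshExample : MonoBehaviour
{
    public GameObject shoulderPadMesh; // the mesh GameObject this checkbox controls

    // Wire this method to the Toggle's OnValueChanged event in the inspector and
    // drag the Toggle itself into the parameter slot.
    public void OnShoulderPadToggled(Toggle id)
    {
        shoulderPadMesh.SetActive(id.isOn);
    }
}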
For instance, using the UI I can select Shoulder Pad 4, Buff Body Type, move the scroll bar until the Hammer weapon shows up, selecting the second Helmet checkbox, selecting Shield 1 and Boot 2, my character will look like the figure below.We need a way to refer to each one of the meshes representing the different types of customizable objects on the model. This will be done through a C# script. The script will need to keep track of all the parts we are going to be managing for customization. Some models will not have the extra meshes attached. You can always create empty GameObjects at a particular location on the model, and you can dynamically instantiate the prefab representing your custom object at the given point. This can also be done for our current model, for instance, if we have a special space weapon that somehow gets dropped by the aliens in the game world, we can attach the weapon to our model through C# code. The important thing is to understand the concept, and the rest is up to you! The Code for Character Customization Things don't happen automatically. So we need to create some C# code that will handle the customization of our character model. The script we create here will handle the UI events that will drive the enabling and disabling of different parts of the model mesh. Create a new C# script and call itCharacterCustomization.cs. This script will be attached to theBaseGameObject in the scene. Here is a listing of the script: using UnityEngine; using UnityEngine.UI; using System.Collections; using UnityEngine.SceneManagement; public class CharacterCustomization : MonoBehaviour { public GameObject PLAYER_CHARACTER; public Material[] PLAYER_SKIN; public GameObject CLOTH_01LOD0; public GameObject CLOTH_01LOD0_SKIN; public GameObject CLOTH_02LOD0; public GameObject CLOTH_02LOD0_SKIN; public GameObject CLOTH_03LOD0; public GameObject CLOTH_03LOD0_SKIN; public GameObject CLOTH_03LOD0_FAT; public GameObject BELT_LOD0; public GameObject SKN_LOD0; public GameObject FAT_LOD0; public GameObject RGL_LOD0; public GameObject HAIR_LOD0; public GameObject BOW_LOD0; // Head Equipment public GameObject GLADIATOR_01LOD0; public GameObject HELMET_01LOD0; public GameObject HELMET_02LOD0; public GameObject HELMET_03LOD0; public GameObject HELMET_04LOD0; // Shoulder Pad - Right Arm / Left Arm public GameObject SHOULDER_PAD_R_01LOD0; public GameObject SHOULDER_PAD_R_02LOD0; public GameObject SHOULDER_PAD_R_03LOD0; public GameObject SHOULDER_PAD_R_04LOD0; public GameObject SHOULDER_PAD_L_01LOD0; public GameObject SHOULDER_PAD_L_02LOD0; public GameObject SHOULDER_PAD_L_03LOD0; public GameObject SHOULDER_PAD_L_04LOD0; // Fore Arm - Right / Left Plates public GameObject ARM_PLATE_R_1LOD0; public GameObject ARM_PLATE_R_2LOD0; public GameObject ARM_PLATE_L_1LOD0; public GameObject ARM_PLATE_L_2LOD0; // Player Character Weapons public GameObject AXE_01LOD0; public GameObject AXE_02LOD0; public GameObject CLUB_01LOD0; public GameObject CLUB_02LOD0; public GameObject FALCHION_LOD0; public GameObject GLADIUS_LOD0; public GameObject MACE_LOD0; public GameObject MAUL_LOD0; public GameObject SCIMITAR_LOD0; public GameObject SPEAR_LOD0; public GameObject SWORD_BASTARD_LOD0; public GameObject SWORD_BOARD_01LOD0; public GameObject SWORD_SHORT_LOD0; // Player Character Defense Weapons public GameObject SHIELD_01LOD0; public GameObject SHIELD_02LOD0; public GameObject QUIVER_LOD0; public GameObject BOW_01_LOD0; // Player Character Calf - Right / Left public GameObject KNEE_PAD_R_LOD0; public GameObject 
LEG_PLATE_R_LOD0; public GameObject KNEE_PAD_L_LOD0; public GameObject LEG_PLATE_L_LOD0; public GameObject BOOT_01LOD0; public GameObject BOOT_02LOD0; // Use this for initialization void Start() { } public bool ROTATE_MODEL = false; // Update is called once per frame void Update() { if (Input.GetKeyUp(KeyCode.R)) { this.ROTATE_MODEL = !this.ROTATE_MODEL; } if (this.ROTATE_MODEL) { this.PLAYER_CHARACTER.transform.Rotate(new Vector3(0, 1, 0), 33.0f * Time.deltaTime); } if (Input.GetKeyUp(KeyCode.L)) { Debug.Log(PlayerPrefs.GetString("NAME")); } } public void SetShoulderPad(Toggle id) { switch (id.name) { case "SP-01": { this.SHOULDER_PAD_R_01LOD0.SetActive(id.isOn); this.SHOULDER_PAD_R_02LOD0.SetActive(false); this.SHOULDER_PAD_R_03LOD0.SetActive(false); this.SHOULDER_PAD_R_04LOD0.SetActive(false); this.SHOULDER_PAD_L_01LOD0.SetActive(id.isOn); this.SHOULDER_PAD_L_02LOD0.SetActive(false); this.SHOULDER_PAD_L_03LOD0.SetActive(false); this.SHOULDER_PAD_L_04LOD0.SetActive(false); PlayerPrefs.SetInt("SP-01", 1); PlayerPrefs.SetInt("SP-02", 0); PlayerPrefs.SetInt("SP-03", 0); PlayerPrefs.SetInt("SP-04", 0); break; } case "SP-02": { this.SHOULDER_PAD_R_01LOD0.SetActive(false); this.SHOULDER_PAD_R_02LOD0.SetActive(id.isOn); this.SHOULDER_PAD_R_03LOD0.SetActive(false); this.SHOULDER_PAD_R_04LOD0.SetActive(false); this.SHOULDER_PAD_L_01LOD0.SetActive(false); this.SHOULDER_PAD_L_02LOD0.SetActive(id.isOn); this.SHOULDER_PAD_L_03LOD0.SetActive(false); this.SHOULDER_PAD_L_04LOD0.SetActive(false); PlayerPrefs.SetInt("SP-01", 0); PlayerPrefs.SetInt("SP-02", 1); PlayerPrefs.SetInt("SP-03", 0); PlayerPrefs.SetInt("SP-04", 0); break; } case "SP-03": { this.SHOULDER_PAD_R_01LOD0.SetActive(false); this.SHOULDER_PAD_R_02LOD0.SetActive(false); this.SHOULDER_PAD_R_03LOD0.SetActive(id.isOn); this.SHOULDER_PAD_R_04LOD0.SetActive(false); this.SHOULDER_PAD_L_01LOD0.SetActive(false); this.SHOULDER_PAD_L_02LOD0.SetActive(false); this.SHOULDER_PAD_L_03LOD0.SetActive(id.isOn); this.SHOULDER_PAD_L_04LOD0.SetActive(false); PlayerPrefs.SetInt("SP-01", 0); PlayerPrefs.SetInt("SP-02", 0); PlayerPrefs.SetInt("SP-03", 1); PlayerPrefs.SetInt("SP-04", 0); break; } case "SP-04": { this.SHOULDER_PAD_R_01LOD0.SetActive(false); this.SHOULDER_PAD_R_02LOD0.SetActive(false); this.SHOULDER_PAD_R_03LOD0.SetActive(false); this.SHOULDER_PAD_R_04LOD0.SetActive(id.isOn); this.SHOULDER_PAD_L_01LOD0.SetActive(false); this.SHOULDER_PAD_L_02LOD0.SetActive(false); this.SHOULDER_PAD_L_03LOD0.SetActive(false); this.SHOULDER_PAD_L_04LOD0.SetActive(id.isOn); PlayerPrefs.SetInt("SP-01", 0); PlayerPrefs.SetInt("SP-02", 0); PlayerPrefs.SetInt("SP-03", 0); PlayerPrefs.SetInt("SP-04", 1); break; } } } public void SetBodyType(Toggle id) { switch (id.name) { case "BT-01": { this.RGL_LOD0.SetActive(id.isOn); this.FAT_LOD0.SetActive(false); break; } case "BT-02": { this.RGL_LOD0.SetActive(false); this.FAT_LOD0.SetActive(id.isOn); break; } } } public void SetKneePad(Toggle id) { this.KNEE_PAD_R_LOD0.SetActive(id.isOn); this.KNEE_PAD_L_LOD0.SetActive(id.isOn); } public void SetLegPlate(Toggle id) { this.LEG_PLATE_R_LOD0.SetActive(id.isOn); this.LEG_PLATE_L_LOD0.SetActive(id.isOn); } public void SetWeaponType(Slider id) { switch (System.Convert.ToInt32(id.value)) { case 0: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); 
this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 1: { this.AXE_01LOD0.SetActive(true); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 2: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(true); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 3: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(true); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 4: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(true); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 5: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(true); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 6: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(true); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 7: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(true); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); 
this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 8: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(true); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 9: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(true); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 10: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(true); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 11: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(true); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 12: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(true); this.SWORD_SHORT_LOD0.SetActive(false); break; } case 13: { this.AXE_01LOD0.SetActive(false); this.AXE_02LOD0.SetActive(false); this.CLUB_01LOD0.SetActive(false); this.CLUB_02LOD0.SetActive(false); this.FALCHION_LOD0.SetActive(false); this.GLADIUS_LOD0.SetActive(false); this.MACE_LOD0.SetActive(false); this.MAUL_LOD0.SetActive(false); this.SCIMITAR_LOD0.SetActive(false); this.SPEAR_LOD0.SetActive(false); this.SWORD_BASTARD_LOD0.SetActive(false); this.SWORD_BOARD_01LOD0.SetActive(false); this.SWORD_SHORT_LOD0.SetActive(true); break; } } } public void SetHelmetType(Toggle id) { switch (id.name) { case "HL-01": { this.HELMET_01LOD0.SetActive(id.isOn); this.HELMET_02LOD0.SetActive(false); this.HELMET_03LOD0.SetActive(false); this.HELMET_04LOD0.SetActive(false); break; } case "HL-02": { this.HELMET_01LOD0.SetActive(false); this.HELMET_02LOD0.SetActive(id.isOn); this.HELMET_03LOD0.SetActive(false); this.HELMET_04LOD0.SetActive(false); break; } case "HL-03": { this.HELMET_01LOD0.SetActive(false); this.HELMET_02LOD0.SetActive(false); 
this.HELMET_03LOD0.SetActive(id.isOn); this.HELMET_04LOD0.SetActive(false); break; } case "HL-04": { this.HELMET_01LOD0.SetActive(false); this.HELMET_02LOD0.SetActive(false); this.HELMET_03LOD0.SetActive(false); this.HELMET_04LOD0.SetActive(id.isOn); break; } } } public void SetShieldType(Toggle id) { switch (id.name) { case "SL-01": { this.SHIELD_01LOD0.SetActive(id.isOn); this.SHIELD_02LOD0.SetActive(false); break; } case "SL-02": { this.SHIELD_01LOD0.SetActive(false); this.SHIELD_02LOD0.SetActive(id.isOn); break; } } } public void SetSkinType(Slider id) { this.SKN_LOD0.GetComponent<Renderer>().material = this.PLAYER_SKIN[System.Convert.ToInt32(id.value)]; this.FAT_LOD0.GetComponent<Renderer>().material = this.PLAYER_SKIN[System.Convert.ToInt32(id.value)]; this.RGL_LOD0.GetComponent<Renderer>().material = this.PLAYER_SKIN[System.Convert.ToInt32(id.value)]; } public void SetBootType(Toggle id) { switch (id.name) { case "BT-01": { this.BOOT_01LOD0.SetActive(id.isOn); this.BOOT_02LOD0.SetActive(false); break; } case "BT-02": { this.BOOT_01LOD0.SetActive(false); this.BOOT_02LOD0.SetActive(id.isOn); break; } } } } This is a long script but it is straightforward. At the top of the script we have defined all of the variables that will be referencing the different meshes in our model character. All variables are of type GameObject with the exception of thePLAYER_SKINvariable which is an array ofMaterialdata type. The array is used to store the different types of texture created for the character model. There are a few functions defined that are called by the UI event handler. These functions are:SetShoulderPad(Toggle id); SetBodyType(Toggle id); SetKneePad(Toggle id); SetLegPlate(Toggle id); SetWeaponType(Slider id); SetHelmetType(Toggle id); SetShieldType(Toggle id); SetSkinType(Slider id);All of the functions take a parameter that identifies which specific type is should enable or disable. A BIG NOTE HERE! You can also use the system we just built to create all of the different variations of your Non-Character Player models and store them as prefabs! Wow! This will save you so much time and effort in creating your characters representing different barbarians!!! Preserving Our Character State Now that we have spent the time to customize our character, we need to preserve our character and use it in our game. In Unity, there is a function calledDontDestroyOnLoad(). This is a great function that can be utilized at this time. What does it do? It keeps the specified GameObject in memory going from one scene to the next. We can use these mechanisms for now, eventually though, you will want to create a system that can save and load your user data. Go ahead and create a new C# script and call itDoNotDestroy.cs. This script is going to be very simple. Here is the listing: using UnityEngine; using System.Collections; public class DoNotDestroy : MonoBehaviour { // Use this for initialization void Start() { DontDestroyOnLoad(this); } // Update is called once per frame void Update() { } } After you create the script go ahead and attach it to your character model prefab in the scene. Not bad, let's do a quick recap of what we have done so far. Recap By now you should have three scenes that are functional. We have our scene that represents the main menu, we have our scene that represents our initial level, and we just created a scene that is used for character customization. 
Here is the flow of our game thus far: we start the game, see the main menu, select the Start Game button to enter the character customization scene, do our customization, and when we click the Save button we load level 1. For this to work, we have created the following C# scripts:

GameMaster.cs: used as the main script to keep track of our game state
CharacterCustomization.cs: used exclusively for customizing our character
DoNotDestroy.cs: used to save the state of a given object
CharacterController.cs: used to control the motion of our character
IKHandle.cs: used to implement inverse kinematics for the foot

When you combine all of this together, you now have a good framework and flow that can be extended and improved as we go along.

Summary

We covered some very important topics and concepts in this article that can be used and enhanced for your games. We started the article by looking into how to customize your player character. The concepts you take away from the article can be applied to a wide variety of scenarios. We looked at how to understand the structure of your character model so that you can better determine the customization methods; these cover the different types of weapons, clothing, armour, shields, and so on. We then looked at how to create a user interface to help enable us with the customization of our player character during gameplay. We also learned that the tool we developed can be used to quickly create several different character models (customized) and store them as Prefabs for later use. Great time saver! We also learned how to preserve the state of our player character after customization for gameplay. You should now have an idea of how to approach your project. Resources for Article: Further resources on this subject: Animations Sprites [article] Development Tricks with Unreal Engine 4 [article] The Game World [article]

Top 5 free Business Intelligence tools

Amey Varangaonkar
02 Apr 2018
7 min read
There is no shortage of business intelligence tools available to modern businesses today. But they're not always easy on the pocket. Great functionality, stylish UI and ease of use always comes with a price tag. If you can afford it, great - if not, it's time to start thinking about open source and free business intelligence tools.  Free business intelligence tools can power your business Take a look at 5 of the best free or open source business intelligence tools. They're all as effective and powerful as anything you'd pay a premium for. You simply need to know what you're doing with them. BIRT BIRT (Business Intelligence and Reporting Tools) is an open-source project that offers industry-standard reporting and BI capabilities. It's available as both a desktop and web application. As a top-level project within the umbrella of the Eclipse Foundation, it's got a good pedigree that means you can be confident in its potency. BIRT is especially useful for businesses which have a working environment built around Java and Java EE, as its reporting and charting engines can integrate seamlessly with Java. From creating a range of reports to different types of charts and graphs, BIRT can also be used for advanced analytical tasks. You can learn about the impressive reporting capabilities that BIRT offers on its official features page. Pros: The BIRT platform is one of the most popularly used open source business intelligence tools across the world, with more than 12 million downloads and 2.5 million users across more than 150 countries. With a large community of users, getting started with this tool, or getting solutions to problems that you might come across should be easy. Cons: Some programming experience, preferably in Java, is required to make the best use of this tool. The complex functions and features may not be easy to grasp for absolute beginners. Jaspersoft Community Jaspersoft, formerly known as Panscopic, is one of the leading open source suites of tools for a variety of reporting and business intelligence tasks. It was acquired by TIBCO in 2014 in a deal worth approximately $185 million, and has grown in popularity ever since. Jaspersoft began with the promise of “saving the world from the oppression of complex, heavyweight business intelligence”, and the Community edition offers the following set of tools for easier reporting and analytics: JasperReports Server: This tool is used for designing standalone or embeddable reports which can be used across third party applications JasperReports Library: You can design pixel-perfect reports from different kinds of datasets Jaspersoft ETL: This is a popular warehousing tool powered by Talend for extracting useful insights from a variety of data sources Jaspersoft Studio: Eclipse-based report designer for JasperReports and JasperReports Server Visualize.js: A JavaScript-based framework to embed Jaspersoft applications Pros: Jaspersoft, like BIRT, has a large community of developers looking to actively solve any problem they might come across. More often than not, your queries are bound to be answered satisfactorily. Cons: Absolute beginners might struggle with the variety of offerings and their applications. The suite of Jaspersoft tools is more suited for someone with an intermediate programming experience. KNIME KNIME is a free, open-source data analytics and business intelligence company that offers a robust platform for reporting and data integration. 
Used commonly by data scientists and analysts, KNIME offers features for data mining, machine learning and data visualization in order to build effective end-to-end data pipelines. There are 2 major product offerings from KNIME: KNIME Analytics Platform KNIME Cloud Analytics Platform Considered to be one of the most established players in the Analytics and business intelligence market, KNIME has customers in over 60 countries worldwide. You can often find KNIME featured as a ‘Leader’ in the Gartner Magic Quadrant. It finds applications in a variety of enterprise use-cases, including pharma, CRM, finance, and more. Pros: If you want to leverage the power of predictive analytics and machine learning, KNIME offers you just the perfect environment to build industry-standard, accurate models. You can create a wide variety of visualizations including complex plots and charts, and perform complex ETL tasks with relative ease. Cons: KNIME is not suited for beginners. It's built instead for established professionals such as data scientists and analysts who want to conduct analyses quickly and efficiently. Tableau Public Tableau Public’s promise is simple - “Visualize and share your data in minutes - for free”. Tableau is one of the most popular business intelligence tools out there, rivalling the likes of Qlik, Spotfire, Power BI among others. Along with its enterprise edition which offers premium analytics, reporting and dashboarding features, Tableau also offers a freely available Public version for effective visual analytics. Last year, Tableau released an announcement that the interactive stories and reports published on the Tableau Public platform had received more than 1 billion views worldwide. Leading news organizations around the world, including BBC and CNBC, use Tableau Public for data visualization. Pros: Tableau Public is a very popular tool with a very large community of users. If you find yourself struggling to understand or execute any feature on this platform, there are ample number of solutions available on the community forums and also on forums such as Stack Overflow. The quality of visualizations is industry-standard, and you can publish them anywhere on the web without any hassle. Cons:It’s quite difficult to think of any drawback of using Tableau Public, to be honest. Having limited features as compared to the enterprise edition of Tableau is obviously a shortcoming, though. [box type="info" align="" class="" width=""]Editor’s tip: If you want to get started with Tableau Public and create interesting data stories using it, Creating Data Stories with Tableau Public is one book you do not want to miss out on![/box] Microsoft Power BI Microsoft Power BI is a paid, enterprise-ready offering by Microsoft to empower businesses to find intuitive data insights across a variety of data formats. Microsoft also offers a stripped-down version of Power BI with limited Business Intelligence capabilities called as Power BI Desktop. In this free version, users are offered up to 1 GB of data to work on, and the ability to create different kinds of visualizations on CSV data as well as Excel spreadsheets. The reports and visualizations built using Power BI Desktop can be viewed on mobile devices as well as on browsers, and can be updated on the go. Pros: Free, very easy to use. Power BI Desktop allows you to create intuitive visualizations and reports. For beginners looking to learn the basics of Business Intelligence and data visualization, this is a great tool to use. 
You can also work with any kind of data and connect it to Power BI Desktop effortlessly. Cons: Power BI Desktop doesn't include the full suite of features that makes Power BI such an elegant and wonderful Business Intelligence tool. Also, new reports and dashboards cannot be created via the mobile platform. [box type="info" align="" class="" width=""]Editor's Tip: If you want to get started with Microsoft Power BI, or want handy tips on using Power BI effectively, our Microsoft Power BI Cookbook will prove to be of great use![/box] There are a few other free and open source tools which are quite effective and earn an honorable mention in this article. We were absolutely spoilt for choice, and narrowing all of these options down to a top 5 was a lot of hard work! Some other tools that deserve an honorable mention are Dataiku Free Edition, Pentaho Community Edition, QlikView Personal Edition, and RapidMiner, among others. You may want to check them out as well. What do you think about this list? Are there any other free or open source business intelligence tools that should have made it onto the list?

Create your first OpenAI Gym environment [Tutorial]

Savia Lobo
19 Sep 2018
7 min read
OpenAI Gym is an open source toolkit that provides a diverse collection of tasks, called environments, with a common interface for developing and testing your intelligent agent algorithms. The toolkit introduces a standard Application Programming Interface (API) for interfacing with environments designed for reinforcement learning. Each environment has a version attached to it, which ensures meaningful comparisons and reproducible results with the evolving algorithms and the environments themselves. This article is an excerpt taken from the book, Hands-On Intelligent Agents with OpenAI Gym, written by Praveen Palanisamy. In this article, you will get to know what OpenAI Gym is, its features, and later create your own OpenAI Gym environment. The Gym toolkit, through its various environments, provides an episodic setting for reinforcement learning, where an agent's experience is broken down into a series of episodes. In each episode, the initial state of the agent is randomly sampled from a distribution, and the interaction between the agent and the environment proceeds until the environment reaches a terminal state. Do not worry if you are not familiar with reinforcement learning. Some of the basic environments available in the OpenAI Gym library are shown in the following screenshot: Examples of basic environments available in the OpenAI Gym with a short description of the task The OpenAI Gym natively has about 797 environments spread over different categories of tasks. The famous Atari category has the largest share with about 116 (half with screen inputs and half with RAM inputs) environments! The categories of tasks/environments supported by the toolkit are listed here: Algorithmic Atari Board games Box2D Classic control Doom (unofficial) Minecraft (unofficial) MuJoCo Soccer Toy text Robotics (newly added) The various types of environment (or tasks) available under the different categories, along with a brief description of each environment, is given next. Keep in mind that you may need some additional tools and packages installed on your system to run environments in each of these categories. To have a detailed overview of each of these categories, head over to the book. With that, you have a very good overview of all the different categories and types of environment that are available as part of the OpenAI Gym toolkit. It is worth noting that the release of the OpenAI Gym toolkit was accompanied by an OpenAI Gym website (gym.openai.com), which maintained a scoreboard for every algorithm that was submitted for evaluation. It showcased the performance of user-submitted algorithms, and some submissions were also accompanied by detailed explanations and source code. Unfortunately, OpenAI decided to withdraw support for the evaluation website. The service went offline in September 2017. Now you have a good picture of the various categories of environment available in OpenAI Gym and what each category provides you with. Next, we will look at the key features of OpenAI Gym that make it an indispensable component in many of today's advancements in intelligent agent development, especially those that use reinforcement learning or deep reinforcement learning. Understanding the features of OpenAI Gym Here, we will take a look at the key features that have made the OpenAI Gym toolkit very popular in the reinforcement learning community and led to it becoming widely adopted. Simple environment interface OpenAI Gym provides a simple and common Python interface to environments. 
Specifically, it takes an action as input and provides observation, reward, done and an optional info object, based on the action as the output at each step. If this does not make perfect sense to you yet, do not worry. We will go over the interface again in a more detailed manner to help you understand. This paragraph is just to give you an overview of the interface to make it clear how simple it is. This provides great flexibility for users as they can design and develop their agent algorithms based on any paradigm they like, and not be constrained to use any particular paradigm because of this simple and convenient interface. Comparability and reproducibility We intuitively feel that we should be able to compare the performance of an agent or an algorithm in a particular task to the performance of another agent or algorithm in the same task. For example, if an agent gets a score of 1,000 on average in the Atari game of Space Invaders, we should be able to tell that this agent is performing worse than an agent that scores 5000 on average in the Space Invaders game in the same amount of training time. But what happens if the scoring system for the game is slightly changed? Or if the environment interface was modified to include additional information about the game states that will provide an advantage to the second agent? This would make the score-to-score comparison unfair, right? To handle such changes in the environment, OpenAI Gym uses strict versioning for environments. The toolkit guarantees that if there is any change to an environment, it will be accompanied by a different version number. Therefore, if the original version of the Atari Space Invaders game environment was named SpaceInvaders-v0 and there were some changes made to the environment to provide more information about the game states, then the environment's name would be changed to SpaceInvaders-v1. This simple versioning system makes sure we are always comparing performance measured on the exact same environment setup. This way, the results obtained are comparable and reproducible. Ability to monitor progress All the environments available as part of the Gym toolkit are equipped with a monitor. This monitor logs every time step of the simulation and every reset of the environment. What this means is that the environment automatically keeps track of how our agent is learning and adapting with every step. You can even configure the monitor to automatically record videos of the game while your agent is learning to play. Creating your first OpenAI Gym environment This section provides a quick way to get started with the OpenAI Gym Python API on Linux and macOS using virtualenv so that you can get a sneak peak into the Gym! MacOS and Ubuntu Linux systems come with Python installed by default. You can check which version of Python is installed by running python --version from a terminal window. If this returns python followed by a version number, then you are good to proceed to the next steps! If you get an error saying the Python command was not found, then you have to install Python. Install virtualenv: $pip install virtualenv If pip is not installed on your system, you can install it by typing sudo easy_install pip. 
Create a virtual environment named openai-gym using the virtualenv tool:
$virtualenv openai-gym
Activate the openai-gym virtual environment:
$source openai-gym/bin/activate
Install all the packages for the Gym toolkit from upstream:
$pip install -U gym
If you get permission denied or failed with error code 1 when you run the pip install command, it is most likely because the permissions on the directory you are trying to install the package to (the openai-gym directory inside virtualenv in this case) need special/root privileges. You can either run sudo -H pip install -U gym[all] to solve the issue or change permissions on the openai-gym directory by running sudo chmod -R o+rw ~/openai-gym.
Test to make sure the installation is successful:
$python -c 'import gym; gym.make("CartPole-v0");'
Creating and visualizing a new Gym environment
In just a minute or two, you have created an instance of an OpenAI Gym environment to get started! Let's open a new Python prompt and import the gym module:
>>import gym
Once the gym module is imported, we can use the gym.make method to create our new environment like this:
>>env = gym.make('CartPole-v0')
>>env.reset()
>>env.render()
This will bring up the CartPole rendering window. Hooray! A short sketch that extends this example into a full episode loop follows at the end of this article.
Summary
In this post, you learned what OpenAI Gym is, its features, and created your first OpenAI Gym environment. You now have a very good idea about OpenAI Gym. If you've enjoyed this post, head over to the book, Hands-On Intelligent Agents with OpenAI Gym, to learn about the latest learning environments and learning algorithms.
Extending OpenAI Gym environments with Wrappers and Monitors [Tutorial]
How to build a cartpole game using OpenAI Gym
Top 5 tools for reinforcement learning
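Extending the CartPole example above, here is a minimal sketch of the observe-and-act loop described in the interface section. It assumes the classic Gym API used at the time of the book (env.step returning observation, reward, done, and info); the random policy and the episode count are illustrative choices, not part of the original tutorial:

import gym

env = gym.make('CartPole-v0')
for episode in range(5):                       # an arbitrary number of episodes
    observation = env.reset()                  # start of a new episode
    total_reward = 0.0
    done = False
    while not done:
        env.render()                           # draw the current frame
        action = env.action_space.sample()     # random action, purely for illustration
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print('Episode {} finished with a return of {}'.format(episode, total_reward))
env.close()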


Google employees ‘Walkout for Real Change’ today. These are their demands.

Natasha Mathur
01 Nov 2018
5 min read
More than 1500 Google employees, around the world, are planning to walk out of their respective Google offices today, to protest against Google’s handling of sexual misconduct within the workplace, according to the New York Times. This is a part of the “women’s walkout” that was organized by more than 200 Google engineers, earlier this week as a response to Google’s handling of sexual misconduct in the recent past, that employees found as inadequate. The planning for the walkout was done last Friday, where Claire Stapleton, product marketing manager at Google’s YouTube created an internal mailing list to organize the walkout according to the New York Times. As the walkout was organized, more than 200 employees had joined in over the weekend, which has since grown to more than 1,500. The organizers took to Twitter, yesterday, to lay out five demands for change within the workplace. The protest has already started at Google’s Tokyo and Singapore office. Google employees and contractors, across the globe, will be leaving work at 11:10 AM in their respective time zones.   Here are some glimpses from the walkout: https://twitter.com/GoogleWalkout/status/1058199862502612993 https://twitter.com/EmmaThomson2/status/1058180157804994562 https://twitter.com/GoogleWalkout/status/1058018104930897920 https://twitter.com/GoogleWalkout/status/1058010748444700672 https://twitter.com/GoogleWalkout/status/1058003099581853697 The demands laid out by the Google employees are as follows: An end to Forced Arbitration in cases of harassment and discrimination for all current and future employees. This means that Google should no longer require people to waive their right to sue. In fact, every co-worker should be given the right to bring a co-worker, representative, or supporter of their choice when meeting with HR for filing a harassment claim. A commitment to end pay and opportunity inequity. This includes making sure that there are women of color at all the levels of the organization. There should also be transparent data on the gender, race, and ethnicity compensation gap, across both level and years of industry experience.  The methods and techniques that have been used to aggregate such data should also be transparent. A publicly disclosed sexual harassment transparency report. This includes the number of harassment claims at Google over time, types of claims submitted, how many victims and accused have left Google, details about exit packages and their worth. A clear, uniform, and globally inclusive process for reporting sexual misconduct safely and anonymously. This is because the current process in place is not working. HR’s performance is assessed by senior management and directors, which forces them to put the management’s interest ahead of the employees that report harassment and discrimination. Accountability, safety, and ability to report regarding unsafe working conditions should not be dictated by the employment status. Elevate the Chief Diversity Officer to answer directly to the CEO and make recommendations directly to the Board of Directors. Appoint an Employee Rep to the Board. The frustration among the Google employees surfaced after the New York Times report brought to light the shocking allegations against Andy Rubin’s (creator of Android) sexual misconduct at Google. As per the report, Rubin was accused of misbehavior in 2014 and the allegations were confirmed by Google. 
Due to this, he was asked to leave by former Google CEO, Mr.Page, but what’s discreditable is the fact that Google paid him $90 million as an exit package. Moreover,  he also received a high profile well-respected farewell by Google in October 2014. Also, the fact that senior executives such as Drummond, Chief Legal Officer, Alphabet, who were mentioned in the NY times report for indulging in “inappropriate relationships” within the organization continues to work in highly placed positions at Google and haven’t faced any real punitive action by Google for their past behavior. “We don’t want to feel that we’re unequal or we’re not respected anymore. Google’s famous for its culture. But in reality, we’re not even meeting the basics of respect, justice, and fairness for every single person here”, Stapleton told the NY Times. Google CEO Sundar Pichai had sent an email to all the Google employees, last Thursday, clarifying that the company has fired 48 people over the last two years for sexual harassment, out of whom, 13  were “senior managers and above”. He also mentioned how none of them received any exit packages. Sundar Pichai, Google’s CEO, further apologized in an email obtained by Axios this Tuesday, saying that the “apology at TGIF didn’t come through, and it wasn’t enough”. Pichai also mentioned that he supports the engineers at Google who have organized a “walkout”. “I am taking in all your feedback so we can turn these ideas into action. We will have more to share soon. In the meantime, Eileen will make sure managers are aware of the activities planned for Thursday and that you have the support you need”, wrote Pichai. The very same day, news of Richard DeVaul, a director at unit X of Alphabet (Google’s parent company) whose name was also mentioned in the New York Times report, resigning from the company came to light. DeVaul had been accused of sexually harassing Star Simpson, a hardware engineer. DeVaul did not receive any exit package on his resignation. Public response to the walkout has been largely positive: https://twitter.com/lizthegrey/status/1057859226100355072 https://twitter.com/amrtgaber/status/1057822987527761920 https://twitter.com/sparker2/status/1057846019122069508 https://twitter.com/LisaIronTongue/status/1057852658948595712 Ex-googler who quit Google on moral grounds writes to Senate about company’s “Unethical” China censorship plan OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly? Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices


Deploying AR Experiences onto Mobile Devices

Anna Braun, Raffael Rizzo
21 Oct 2024
5 min read
This article is an excerpt from the book, XR Development with Unity, by Anna Braun, Raffael Rizzo. This practical guide helps you create immersive VR, AR, and MR experiences using Unity 2021.3 or later versions. You'll learn to add physics, animations, teleportation, sound, effects, and hand-tracking to XR scenes and deploy them on VR headsets, simulators, and mobile devices—all that you need to create interactive XR projects in Unity is here.
Introduction
In this article, you will learn to launch your AR experiences onto smartphones or tablets. Primarily, you have two paths to accomplish this: deployment onto an Android or an iOS device. For solo projects, where the AR application is meant for personal use, you may opt to deploy onto just Android or iOS, depending on your device's operating system. However, for larger-scale projects that involve several users – be it academic, industrial, or any other group, irrespective of its size – it's advisable to deploy and test the AR app on both Android and iOS platforms. This strategy has multiple benefits. First, if your application gains momentum or its usage expands, it would already be compatible with both major platforms, eliminating the need for time-consuming porting later on. Second, making your app accessible on both platforms from the outset can draw in more users, and possibly attract increased funding or support. Another key advantage to this is the cross-platform compatibility offered by Unity. This enables you to maintain a singular code base for both platforms, simplifying the management and updating process for your application. Any modifications made need to be done in one location and then deployed across both platforms. In the next section, we'll delve into the steps required to deploy your AR scene onto an Android device.
Deploying onto Android
This section outlines the procedure to deploy your AR scene onto an Android device. The initial part of the process involves enabling some settings on your phone to prepare it for testing. Here's a step-by-step guide:
Confirm that your device is compatible with ARCore. ARCore is essential for AR Foundation to work correctly. You can find a list of supported devices at https://developers.google.com/ar/devices.
Install ARCore, which AR Foundation uses to enable AR capabilities on Android devices. ARCore can be downloaded from the Google Play Store at https://play.google.com/store/apps/details?id=com.google.ar.core.
Activate Developer Options. To do this, open Settings on your Android device, scroll down, and select About Phone. Find Build number and tap it seven times until a message appears stating You are now a developer! Upon returning to the main Settings menu, you should now see an option called Developer Options. If it's not present, perform an online search to find out how to enable developer mode for your specific device. Though the method described in the previous step is the most common, the variety of Android devices available might require slightly different steps.
With Developer Options enabled, turn on USB Debugging. This will allow you to transfer your AR scene to your Android device via a USB cable. Navigate to Settings | Developer options, scroll down to USB Debugging, and switch it on. Acknowledge any pop-up prompts.
Depending on your Android version, you might need to allow the installation of apps from unknown sources: For Android versions 7 (Nougat) and earlier: Navigate to Settings | Security Settings and then check the box next to Unknown Sources to allow the installation of apps from sources other than the Google Play Store. For Android versions 8 (Oreo) and above: Select Settings |Apps & Notifications | Special App Access | Install unknown apps and activate Unknown sources. You will see a list of apps that you can grant permission to install from unknown sources. This is where you select the File Manager app, as you’re using it to download the unknown app from Unity. 7. Link your Android device to your computer using a USB cable. You can typically use your device’s charging cable for this. A prompt will appear on your Android device asking for permission to allow USB debugging from your computer. Confirm it. With your Android device properly prepared for testing AR scenes, you can now proceed to deploy your Unity AR scene. This involves adjusting several parameters in the Unity Editor’s Build Settings and Player Settings. Here’s a step-by-step guide on how to do this: Select File | Build Settings | Android, then click the Switch Platform button. Now, your Build Settings should look something like what is illustrated in Figure 4.17.  Figure 4.17 – Unity’s Build Settings configuration for Android Next, click on the Player Settings button. This will open a new window, also called Player Settings. Here, select the Android tab, scroll down, and set Minimum API Level to Android 7.0 Nougat (API level 24) or above. This is crucial, as ARCore requires at least Android 7.0 to function properly. Remaining in the Android tab of Player Settings, enter a package name. Ensure it follows the pattern com.company_name.application_name. This pattern is a widely adopted convention for naming application packages in Android and is used to ensure unique identification for each application on the Google Play Store. Return to File | Build Settings and click the Build and Run button. A new window will pop up, prompting you to create a new folder in your project’s directory. Name this folder Builds. Upon selecting this folder, Unity will construct the scene within the newly created Builds folder. This is how you can set up your Android device for deploying AR scenes onto it. In the next section, you will learn how you can deploy your AR scene onto an iOS device, such as an iPhone or iPad. Deploying onto iOS Before we delve into the process of deploying an AR scene onto an iOS device, it’s important to discuss certain hardware prerequisites. Regrettably, if you’re using a Windows PC and an iOS device, it’s not as straightforward as deploying an AR scene made in Unity. The reason for this is that Apple, in its characteristic style, requires the use of Xcode, its proprietary development environment, as an intermediary step. This is only available on Mac devices, not Windows or Linux. If you don’t possess a Mac, there are still ways to deploy your AR scene onto an iOS device. Here are a few alternatives: Borrowing a Mac: The simplest solution to gain access to Xcode and deploy your app onto an iOS device is to borrow a Mac from a friend or coworker. It’s also worth checking whether local libraries, universities, or co-working spaces offer public access to Macs. For commercial or academic projects, it’s highly recommended to invest in a Mac for testing your AR app on iOS. 
Using a virtual machine: Another no-cost alternative is to establish a macOS environment on your non-Apple PC. However, Apple neither endorses nor advises this method due to potential legal issues and stability concerns. Therefore, we won’t elaborate further or recommend it. Employing a Unity plugin: Fortunately, a widely used Unity plugin enables deployment of an AR scene onto your iOS device with relatively less hassle. Navigate to Windows | Asset Store, click on Search Online, and Unity Asset Store will open in your default browser. Search for iOS Project Builder for Windows by Pierre-Marie Baty. Though this plugin costs $50, it is a much cheaper alternative than buying a Mac. After purchasing the plugin, import it into your AR scene and configure everything correctly by following the plugin’s documentation (https://www.pmbaty.com/iosbuildenv/documentation/ unity.html). In this article, we focus exclusively on deploying AR applications onto iOS devices using a Mac for running Unity and Xcode. This is due to potential inconsistencies and maintenance concerns with other aforementioned methods. Before you initiate the deployment setup, ensure that your Mac and iOS devices have the necessary software and settings. The following steps detail this preparatory process: Make sure the latest software versions are installed on your MacOS and iOS devices. Check for updates by navigating to Settings | General | Software Update on each device and install any that are available. Confirm that your iOS device supports ARKit, which is crucial for the correct functioning of AR Foundation. You can check compatibility at https://developer.apple.com/ documentation/arkit/. Generally, any device running on iPadOS 11 or iOS 11 and later versions are compatible. You will need an Apple ID for the following steps. If you don’t have one, you can create it at https://appleid.apple.com/account. Download the Xcode software onto your Mac from Apple’s developer website at https:// developer.apple.com/xcode/. Enable Developer Mode on your iOS device by going to Settings | Privacy & Security | Developer Mode, activate Developer Mode, and then restart your device. If you don’t find the Developer Mode option, connect your iOS device to a Mac using a cable. Open Xcode, then navigate to Window | Devices and Simulator. If your device isn’t listed in the left pane, ensure you trust the computer on your device by acknowledging the prompt that appears after you connect your device to the Mac. Subsequently, you can enable Developer Mode on your iOS device. Having set up your Mac and iOS devices correctly, let’s now proceed with how to deploy your AR scene onto your iOS device. Each time you want to deploy your AR scene onto your iOS device, follow these steps: 1. Use a USB cable to connect your iOS device to your Mac. 2. Within your Unity project, navigate to File | Build Settings and select iOS from Platform options. Click the Switch Platform button. 3. Check the Development Build option in Build Settings for iOS. This enables you to deploy the app for testing purposes onto your iOS device. This step is crucial to avoid the annual subscription cost of an Apple Developer account. Note: Deploying apps onto an iOS device with a free Apple Developer account has certain limitations. You can only deploy up to three apps onto your device at once, and they need to be redeployed every 7 days due to the expiration of the free provisioning profile. 
For industrial or academic purposes, we recommend subscribing to a paid Developer account after thorough testing using the Development Build function. 4. Remain in File | Build Settings | iOS tab, click on Player Settings, scroll down to Bundle Identifier, and input an identifier in the form of com.company_name.application_name. 5. Return to File | Build Settings | iOS tab and click Build and Run. In the pop-up window, create a new folder in your project directory named Builds and select it. 6. Xcode will open with the build, displaying an error message due to the need for a signing certificate. To create this, click on the error message, navigate to the Signing and Capabilities tab, and select the checkbox. In the Team drop-down menu, select New Team, and create a new team consisting solely of yourself. Now, select this newly-created team from the drop-down menu. Ensure that the information in the Bundle Identifier field matches your Unity Project found in Edit | Project Settings | Player. 7. While in Xcode, click on the Any iOS Device menu and select your specific iOS device as the output. 8. Click the Play button on the top left of Xcode and wait for a message indicating Build succeeded. Your AR application should now be on your iOS device. However, you won’t be able to open it until you trust the developer (in this case, yourself). Navigate to Settings | General | VPN & Device Management on your iOS device, tap Developer App certificate under your Apple ID, and then tap Trust (Your Apple ID). 9. On your iOS device’s home screen, click the icon of your AR app. Grant the necessary permissions, such as camera access. Congratulations, you’ve successfully deployed your AR app onto your iOS device! You now know how to deploy your AR experiences onto both Android and iOS devices. ConclusionDeploying AR experiences onto mobile devices opens up a world of possibilities, enabling users to engage with your application in innovative ways. By following the steps outlined in this guide, you can ensure that your AR applications are compatible with both Android and iOS platforms, maximizing their reach and impact. Whether you’re developing for personal use or planning to distribute your app to a broader audience, having cross-platform compatibility from the start can save time and resources in the long run. With the tools and techniques provided here, you are well on your way to creating and deploying compelling AR experiences that captivate users on any mobile device. Looking to dive into the world of virtual, augmented, and mixed reality? XR Development with Unity is the perfect guide for beginners and professionals alike! This book takes you step-by-step through creating immersive experiences using Unity, without needing expensive VR hardware. Learn how to build interactive XR apps, explore hand-tracking, gaze-tracking, multiplayer capabilities, and more. Whether you're a game developer, hobbyist, or industry professional, this resource is a must-have to master cutting-edge XR technologies. Don't miss out—start your XR journey today! Author BioAnna Braun is a Unity expert, who is specialized in creating XR applications. At Deutsche Telekom, Anna has developed XR prototypes in Unity. One prototype enabled warehouse workers to find commodities more easily through the use of special location data and Augmented Reality. At Fraunhofer, Anna specialized in Hand-Tracking and worked on a VR education platform. 
Her master's degree in Extended Reality has a special focus on Eye Tracking, Deep Learning, and Computer Graphics. She is a published author in the tech space and regularly speaks at conferences hosted by academia or non-profits like the Mozilla Foundation. Anna co-founded a company that offers XR consulting and development.Raffael Rizzo is a XR developer and Unity expert. During his work at Deutsche Telekom, he consulted companies on the use of digital twins and implemented augmented reality wayfinding solutions. At Fraunhofer IGD, Raffael worked on a VR education platform. He developed a VR training program for a soccer academy to test the children's reaction times. For the same academy, Raffael created an application that uses computer vision and machine learning to automatically evaluate ball juggling. His master's degree in Extended Reality encompasses Rendering, Computer Vision, Machine Learning, and 3D Visualization. Raffael co-founded a company specializing in XR consulting and development. 

Why decision trees are more flexible than linear models, explains Stephen Klosterman

Guest Contributor
11 Dec 2019
7 min read
This blog post will examine a hypothetical dataset of website visits and customer conversion, to illustrate how decision trees are a more flexible mathematical model than linear models such as logistic regression. Imagine you are monitoring the webpage of one of your products. You are keeping track of how many times individual customers visit this page, the total amount of time they've spent on the page across all their visits, and whether or not they bought the product. Your goal is to be able to predict, for future visitors, how likely they are to buy the product, based on the page visit data. You are considering presenting a discount, or some other kind of offer, to customers you think are likely to buy the product but haven't yet. Get to know more about decision trees and linear models! [box type="shadow" align="" class="" width=""]If you are interested in building your knowledge to prepare data for regularized logistic regression and random forest algorithms, read our book Data Science Projects with Python written by Stephen Klosterman. This book will give you practical guidance on industry-standard data analysis and machine learning tools in Python, with the help of realistic data. You will also learn how to use pandas and Matplotlib to critically examine a dataset with summary statistics and graphs and extract the insights you seek to derive. [/box] After logging the data on many customers, you visualize them and see the following, including some jitter to help see all the data points: There are several interesting patterns visible here. We see that in general, the longer someone spends on the page, the more likely they are to purchase the item. However, this effect seems to depend on the number of visits, in a complex way. Someone who visited the page once and spent at least two minutes there (i.e. two minutes per visit) seems likely to buy, at least up until 18 or so minutes. But someone who visited 10 times as much as this seems likely to buy after only 12 minutes cumulative time (1.2 minutes per visit). Additionally, there is a phenomenon of customers who spend a relatively long time (at least 18 or 19 minutes) over a relatively small number of visits (just one or two), who don't buy. Maybe they opened the page, but then walked away from their computer, and closed the page as soon as they came back. Whatever the reason, the patterns in this data set are interesting and complicated. If you want to create a predictive model of these data, you should consider the likely success of non-linear models, such as decision trees, versus linear models, such as logistic regression (for more information see chapter 3 of my book, Data Science Projects with Python). Logistic Regression as a linear model At a high level, linear models will take the feature space (the two-dimensional space where time is on the x-axis and number of visits is on the y-axis, as in the graph above), and seek to draw a straight line somewhere that creates an accurate division of the two classes of the response variable ("Bought" or "Did not buy"). Consider how likely this is to work. Where would you draw a straight line on the graph above, so that the two regions on either side of the line would primarily contain responses of only one class? It should be apparent that this is not likely to be an entirely successful task. 
The best you could probably do would be to draw a line that isolates non-buying customers who spent relatively little time on the page, represented by the region of dots to the left of the graph, from the blue dots representing buying customers to the right. While this would basically ignore the little group of customers to the lower right, it's probably the best you could do overall for most customers, using the straight-line approach. In fact, this is essentially what a logistic regression classifier looks like when the model is calibrated to these data. The above graph shows the regions of prediction ("Unlikely to buy" and "Likely to buy") as red or blue shading in the background. Deeper colors indicate a higher likelihood for either class. The conceptual straight-line decision boundary that divides the two regions mentioned above, would run right through the white portion of the background, where the probability of belonging to either class is very low. In other words, the model is "uncertain" about what prediction to make in this region. From the above graph, it can be seen that in addition to ignoring the small group of non-buying customers in the lower right, a straight line is also not a great model for isolating the non-buying customers on the left of the graph. While you can imagine that a curve might be able to define this boundary, a straight line cannot. Decision Trees as a non-linear model How can we do better? Enter non-linear models. Decision trees are a prime example of non-linear models. Decision trees work by dividing the data up into regions based on the "if-then" type of questions. For example, if a user spends less than three minutes over two or fewer visits, how likely are they to buy? Graphically, by asking many of these types of questions, a decision tree can divide up the feature space using little segments of vertical and horizontal lines. This approach can create a much more complex decision boundary, as shown below. It should be clear that decision trees can be used with more success, to model this data set. Given this, you would have a better model for the likelihood of customer conversion and could then proceed to design offers to increase conversion (for more information see chapter 5 of my book, Data Science Projects with Python). In conclusion, this post has shown how non-linear models, such as decision trees, can more effectively describe relationships in complex data sets than linear models, such as logistic regression. It should be noted that linear models can be extended to non-linearity by various means including feature engineering. On the other hand, non-linear models may suffer from overfitting, since they are so flexible. Nonetheless, approaches to prevent decision trees from overfitting have been formulated using ensemble models such as random forests and gradient boosted trees, which are among the most successful machine learning techniques in use today. As a final caveat, note this blog post presents a hypothetical, synthetic data set, which can be modeled almost perfectly with decision trees. Real-world data is messier, but the same principles hold. I hope you found this conceptual discussion helpful. For a more detailed explanation of how decision trees and logistic regression work "under the hood" with real-world data, and the python code for a similar hypothetical example to that shown here, check out my book Data Science Projects with Python. 
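To make the comparison above concrete, here is a small, self-contained sketch that fits both model types to synthetic two-feature data loosely inspired by the hypothetical visits-and-time example. The data-generating rule, thresholds, and model settings are assumptions made for illustration; they do not reproduce the dataset or code from the book:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 2000
time_on_page = rng.uniform(0, 20, n)       # cumulative minutes spent on the page
visits = rng.integers(1, 11, n)            # number of visits to the page

# Invented non-linear rule: customers convert when they spend enough time per visit,
# except for long sessions spread over only one or two visits, which never convert.
per_visit = time_on_page / visits
bought = ((per_visit > 1.2) & ~((visits <= 2) & (time_on_page > 18))).astype(int)

X = np.column_stack([time_on_page, visits])
X_train, X_test, y_train, y_test = train_test_split(X, bought, random_state=0)

logreg = LogisticRegression().fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)

# The tree can carve out the corner of non-buying long sessions with axis-aligned splits,
# while a single straight decision boundary cannot.
print('logistic regression accuracy:', logreg.score(X_test, y_test))
print('decision tree accuracy:      ', tree.score(X_test, y_test))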
Author Bio Stephen Klosterman is a machine learning data scientist and the author of the book Data Science Projects with Python. He enjoys helping to frame problems in a data science context and delivering machine learning solutions that business stakeholders understand and value. His education includes a Ph.D. in biology from Harvard University, where he was an assistant teacher of the data science course. About the Book This book Data Science Projects with Python will help you understand the working and output of machine learning algorithms and gain insight into not only the predictive capabilities of the models but also their reasons for making these predictions. The book also provides detailed insight on how to build a classification model and how to conduct a financial analysis to find the optimal threshold for binary classification. This will help you with financial budgeting and operational strategy for a well-optimized usage model. At the end of this book, you will be able to confidently use various machine learning algorithms to perform detailed data analysis. Netflix open-sources Metaflow, its Python framework for building and managing data science projects What does a data science team look like? Get Ready for Open Data Science Conference 2019 in Europe and California How to learn data science: from data mining to machine learning Dr.Brandon explains Decision Trees to Jon


Best practices for RESTful web services : Naming conventions and API Versioning [Tutorial]

Sugandha Lahoti
12 Jul 2019
12 min read
This article covers two important best practices for REST and RESTful APIs: Naming conventions and API Versioning. This article is taken from the book Hands-On RESTful Web Services with TypeScript 3 by Biharck Muniz Araújo. This book will guide you in designing and developing RESTful web services with the power of TypeScript 3 and Node.js. What are naming conventions One of the keys to achieving a good RESTful design is naming the HTTP verbs appropriately. It is really important to create understandable resources that allow people to easily discover and use your services. A good resource name implies that the resource is intuitive and clear to use. On the other hand, the usage of HTTP methods that are incompatible with REST patterns creates noise and makes the developer's life harder. In this section, there will be some suggestions for creating clear and good resource URIs. It is good practice to expose resources as nouns instead of verbs. Essentially, a resource represents a thing, and that is the reason you should use nouns. Verbs refer to actions, which are used to factor HTTP actions. Three words that describe good resource naming conventions are as follows: Understandability: The resource's representation format should be understandable and utilizable by both the server and the client Completeness: A resource should be completely represented by the format Linkability: A resource can be linked to another resource Some example resources are as follows: Users of a system Blogs posts An article Disciplines in which a student is enrolled Students in which a professor teaches A blog post draft Each resource that's exposed by any service in a best-case scenario should be exposed by a unique URI that identifies it. It is quite common to see the same resource being exposed by more than one URI, which is definitely not good. It is also good practice to do this when the URI makes sense and describes the resource itself clearly. URIs need to be predictable, which means that they have to be consistent in terms of data structure. In general, this is not a REST required rule, but it enhances the service and/or the API. A good way to write good RESTful APIs is by writing them while having your consumers in mind. There is no reason to write an API and name it while thinking about the APIs developers rather than its consumers, who will be the people who are actually consuming your resources and API (as the name suggests). Even though the resource now has a good name, which means that it is easier to understand, it is still difficult to understand its boundaries. Imagine that services are not well named; bad naming creates a lot of chaos, such as business rule duplications, bad API usage, and so on. In addition to this, we will explain naming conventions based on a hypothetical scenario. Let's imagine that there is a company that manages orders, offers, products, items, customers, and so on. Considering everything that we've said about resources, if we decided to expose a customer resource and we want to insert a new customer, the URI might be as follows: POST https://<HOST>/customers The hypothetical request body might be as follows: { "fist-name" : "john", "last-name" : "doe", "e-mail" : "john.doe@email.com" } Imagine that the previous request will result in a customer ID of 445839 when it needs to recover the customer. 
The GET method could be called as follows: GET https://<HOST>/customers/445839 The response will look something like this: sample body response for customer #445839: { "customer-id": 445839, "fist-name" : "john", "last-name" : "doe", "e-mail" : "john.doe@email.com" } The same URI can be used for the PUT and DELETE operations, respectively: PUT https://<HOST>/customers/445839 The PUT body request might be as follows: { "last-name" : "lennon" } For the DELETE operation, the HTTP request to the URI will be as follows: DELETE https://<HOST>/customers/445839 Moving on, based on the naming conventions, the product URI might be as follows: POST https://<HOST>/products sample body request: { "name" : "notebook", "description" : "and fruit brand" } GET https://<HOST>/products/9384 PUT https://<HOST>/products/9384 sample body request: { "name" : "desktop" } DELETE https://<HOST>/products/9384 Now, the next step is to expose the URI for order creation. Before we continue, we should go over the various ways to expose the URI. The first option is to do the following: POST https://<HOST>/orders However, this could be outside the context of the desired customer. The order exists without a customer, which is quite odd. The second option is to expose the order inside a customer, like so: POST https://<HOST>/customers/445839/orders Based on that model, all orders belong to user 445839. If we want to retrieve those orders, we can make a GET request, like so: GET https://<HOST>/customers/445839/orders As we mentioned previously, it is also possible to write hierarchical concepts when there is a relationship between resources or entities. Following the same idea of orders, how should we represent the URI to describe items within an order and an order that belongs to user 445839? First, if we would like to get a specific order, such as order 7384, we can do that like so: GET https://<HOST>/customers/445839/orders/7384 Following the same approach, to get the items, we could use the following code: GET https://<HOST>/customers/445839/orders/7384/items The same concept applies to the create process, where the URI is still the same, but the HTTP method is POST instead of GET. In this scenario, the body also has to be sent: POST https://<HOST>/customers/445839/orders/7384 { "id" : 7834, "quantity" : 10 } Now, you should have a good idea of what the GET operation offers in regard to orders. The same approach can also be applied so that you can go deeper and get a specific item from a specific order and from a specific user: GET https://<HOST>/customers/445839/orders/7384/items/1 Of course, this hierarchy applies to the PUT, PATCH, and POST methods, and in some cases, the DELETE method as well. It will depend on your business rules; for example, can the item be deleted? Can I update an order? What is API versioning As APIs are being developed, gathering more business rules for their context on a day-to-day basis, generating tech debits and maturing, there often comes a point where teams need to release breaking functionality. It is also a challenge to keep their existing consumers working perfectly. One way to keep them working is by versioning APIs. Breaking changes can get messy. When something changes abruptly, it often generates issues for consumers, as this usually isn't planned and directly affects the ability to deliver new business experiences. There is a variant that says that APIs should be versionless. 
This means that building APIs that won't change their contract forces every change to be viewed through the lens of backward compatibility. This drives us to create better API interfaces, not only to solve any current issues, but to allow us to build APIs based on foundational capabilities or business capabilities themselves. Here are a few tips that should help you out: Put yourself in the consumer's shoes: When it comes to product perspective, it is suggested that you think from the consumer's point of view when building APIs. Most breaking changes happen because developers build APIs without considering the consumers, which means that they are building something for themselves and not for the real users' needs. Contract-first design: The API interface has to be treated as a formal contract, which is harder to change and more important than the coding behind it. The key to API design success is understanding the consumer's needs and the business associated with it to create a reliable contract. This is essentially a good, productive conversation between the consumers and the producers. Requires tolerant readers: It is quite common to add new fields to a contract with time. Based on what we have learned so far, this could generate a breaking change. This sometimes occurs because, unfortunately, many consumers utilize a deserializer strategy, which is strict by default. This means that, in general, the plugin that's used to deserialize throws exceptions on fields that have never been seen before. It is not recommended to version APIs, but only because you need to add a new optional field to the contract. However, in the same way, we don't want to break changes on the client side. Some good advice is documenting any changes, stating that new fields might be added so that the consumers aren't surprised by any new changes. Add an object wrapper: This sounds obvious, but when teams release APIs without object wrappers, the APIs turn on hard APIs, which means that they are near impossible to evolve without having to make breaking changes. For instance, let's say your team has delivered an API based on JSON that returns a raw JSON array. So far, so good. However, as they continue, they find out that they have to deal with paging, or have to internationalize the service or any other context change. There is no way of making changes without breaking something because the return is based on raw JSON. Always plan to version: Don't think you have built the best turbo API in the world ever. APIs are built with a final date, even though you don't know it yet. It's always a good plan to build APIs while taking versioning into consideration. Including the version in the URL Including the version in the URL is an easy strategy for having the version number added at the end of the URI. Let's see how this is done: https://api.domain.com/v1/ https://api.domain.com/v2/ https://api.domain.com/v3/ Basically, this model tells the consumers which API version they are using. Every breaking change increases the version number. One issue that may occur when the URI for a resource changes is that the resource may no longer be found with the old URI unless redirects are used. 
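To see what URL-based versioning can look like in practice, here is a minimal sketch that exposes the same customer resource under /v1 and /v2 prefixes. The book this article is excerpted from builds its services in TypeScript, so treat this Python/Flask version as a hedged, framework-agnostic illustration rather than the book's own code; the v2 payload shape is an invented breaking change:

from flask import Flask, jsonify

app = Flask(__name__)

# Version 1: the original contract with flat name fields
@app.route('/v1/customers/<int:customer_id>', methods=['GET'])
def get_customer_v1(customer_id):
    return jsonify({
        'customer-id': customer_id,
        'first-name': 'john',
        'last-name': 'doe',
    })

# Version 2: a breaking change -- the name fields are nested, so existing v1
# consumers keep working while new consumers opt into the v2 contract
@app.route('/v2/customers/<int:customer_id>', methods=['GET'])
def get_customer_v2(customer_id):
    return jsonify({
        'customer-id': customer_id,
        'name': {'first': 'john', 'last': 'doe'},
    })

if __name__ == '__main__':
    app.run(port=8080)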
Versioning in the subdomain In regard to versioning in the URL, subdomain versioning puts the version within the URI but associated with the domain, like so: https://v1.api.domain.com/ https://v2.api.domain.com/ https://v3.api.domain.com/ This is quite similar to versioning at the end of the URI. One of the advantages of using a subdomain strategy is that your API can be hosted on different servers. Versioning on media types Another approach to versioning is using MIME types to include the API version. In short, API producers register these MIME types on their backend and then the consumers need to include accept and content-type headers. The following code lets you use an additional header: GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/json Version: 1 GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/json Version: 2 GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/json Version: 3 The following code lets you use an additional field in the accept/content-type header: GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/json; version=1 GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/json; version=2 GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/json; version=3 The following code lets you use a Media type: GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/vnd.<host>.orders.v1+json GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/vnd.<host>.orders.v2+json GET https://<HOST>/orders/1325 HTTP/1.1 Accept: application/vnd.<host>.orders.v3+json Recommendation When using a RESTful service, it is highly recommended that you use header-based versioning. However, the recommendation is to keep the version in the URL. This strategy allows the consumers to open the API in a browser, send it in an email, bookmark it, share it more easily, and so on. This format also enables human log readability. There are also a few more recommendations regarding API versioning: Use only the major version: API consumers should only care about breaking changes. Use a version number: Keep things clear; numbering the API incrementally allows the consumer to track evolvability. Versioning APIs using timestamps or any other format only creates confusion in the consumer's mind. This also exposes more information about versioning than is necessary. Require that the version has to be passed: Even though this is more convenient from the API producer's perspective, starting with a version is a good strategy because the consumers will know that the API version might change and they will be prepared for that. Document your API time-to-live policy: Good documentation is a good path to follow. Keeping everything well-described will mean that consumers avoid finding out that there is no Version 1 available anymore because it has been deprecated. Policies allow consumers to be prepared for issues such as depreciation. In this article, we learned about best practices related to RESTful web services such naming conventions, and API versioning formats. Next, to look at how to design RESTful web services with OpenAPI and Swagger, focusing on the core principles while creating web services, read our book Hands-On RESTful Web Services with TypeScript 3. 7 reasons to choose GraphQL APIs over REST for building your APIs Which Python framework is best for building RESTful APIs? Django or Flask? Understanding advanced patterns in RESTful API [Tutorial]
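Returning to the media-type versioning snippets shown earlier in this article, here is a short sketch of how a server might negotiate the version from the request headers. The parsing logic and the default-to-version-1 behavior are assumptions made for this illustration, again written in Python/Flask rather than the book's TypeScript stack:

import re

from flask import Flask, jsonify, request

app = Flask(__name__)

def requested_version(default=1):
    # Accepts either a separate "Version: 2" header or a version embedded in the Accept
    # header, e.g. "application/json; version=2" or "application/vnd.host.orders.v2+json".
    explicit = request.headers.get('Version', '')
    if explicit.strip().isdigit():
        return int(explicit)
    accept = request.headers.get('Accept', '')
    match = re.search(r'version=(\d+)|\.v(\d+)\+json', accept)
    if match:
        return int(match.group(1) or match.group(2))
    return default

@app.route('/orders/<int:order_id>', methods=['GET'])
def get_order(order_id):
    if requested_version() >= 2:
        body = {'order-id': order_id, 'items': [], 'schema': 'v2'}
    else:
        body = {'order-id': order_id, 'schema': 'v1'}
    return jsonify(body)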