
How-To Tutorials


Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company

Natasha Mathur
23 Nov 2018
10 min read
Earlier this month, 20,000 Google employees, along with temps, vendors, and contractors, walked out of their respective Google offices to protest the discrimination, racism, and sexual harassment they encountered in Google's workplace. As part of the walkout, Google employees laid out five demands urging Google to bring about structural changes within the workplace. In the latest episode of Recode Decode with Kara Swisher, released yesterday, six of the Google walkout organizers - Erica Anderson, Claire Stapleton, Meredith Whittaker, Stephanie Parker, Cecilia O'Neil-Hart, and Amr Gaber - spoke about Google's dismissive approach towards those five demands.

A day after the walkout, Google addressed the demands in a note written by Sundar Pichai, in which he admitted that they have "not always gotten everything right in the past" and are "sincerely sorry". Pichai also mentioned: "It's clear that to live up to the high bar we set for Google, we need to make some changes. Going forward, we will provide more transparency into how you raise concerns and how we handle them". The 'walkout for real change' was a response to the New York Times report, published last month, that exposed how Google has protected senior executives (Android founder Andy Rubin among them) who had been accused of sexual misconduct in the recent past. We'll now have a look at the major highlights from the podcast.

Key Takeaways

The podcast covers how the organizers formulated their demands, the rights of contractors at Google, the post-walkout town hall meeting, and the steps Google employees will take next.

How the walkout mobilized collective action and the formulation of demands

As per the Google employees, collating demands was a collective effort from the very beginning. They were inspired by stories of sexual harassment at Google that were circulating on an internal email chain. This urged the organizers of the walkout to send an email to a large group of women stating that they needed to do something about it, to which many employees responded that they should put out their demands. A live Google Doc was prepared that listed all the demands suggested by fellow Googlers. "It was just this truly collective action, living, moving in a Google Document that we were all watching and participating in," said Cecilia O'Neil-Hart, a marketer at YouTube. Cecilia also pointed out that the demands being collected were not new and represented the voices of a lot of groups at Google. "It was just completely a process of defining what we wanted in solidarity with each other. I think it showed me the power of collective action, writing the demands quite literally as a collective," said Cecilia.

Rights of Contractors

One of the demands laid out by the Google employees as part of the walkout states: "commitment to ending pay and opportunity inequity for all levels of the organization". They expected a change that applies not just to full-time employees but also to contract workers and subcontract workers, who work at Google with rights that are restricted and different from those of full-time employees.

"We have contractors that manage teams of upwards of 10, 20, even more, other people but left in this second-class state where they don't have healthcare benefits, they don't have paid sick leave and they definitely don't get access to the same well-being resources: counseling, professional development, any of that," adds Stephanie Parker, a policy specialist on Trust and Safety at YouTube. Other examples of discrimination against contractors at Google include the shooting at YouTube headquarters in April, where contract workers (security guards, cafeteria workers, and so on) were excluded from the post-shooting town hall meeting conducted by Susan Wojcicki, CEO of YouTube; while the shooting was taking place, all the employees except the contractors were being kept updated via texts. Similarly, contractors were not allowed in the town hall meeting conducted six days after the walkout, although the demands applied to them just as much as they did to full-time employees. There is also systemic racism in hiring and promotion for certain job ladders, such as engineering, versus other job ladders and contract work. Parker mentioned that by including contractors in the five demands, they wanted to bring to everyone's attention that despite Google striving to be the company with the best workplace and the best benefits, it is quite far off from leading in that space. "The solution is to convert them to full-time or to treat them fairly with respect. Not to throw up our hands and say, 'Oh well'," said Parker.

Post-walkout town hall meeting

Six days after the walkout, a mail regarding the town hall meeting was sent over to the employees, which Google said was accidentally "leaked". Stapleton, a marketing manager at YouTube, says that "the town hall was really tough to watch" and that the Google executives "did not ever address, acknowledge, the list of demands nor did they adequately provide solutions to all the five. They did drop forced arbitration, but for sexual harassment only, not discrimination, which was a key omission". As per the employees, Google seemed to use the same old methods to get the situation under control. Google said it will focus on committing to OKRs (Objectives and Key Results), that is, the main goals for the company as a whole. Moreover, the executives tried to play down the other core concerns, such as discrimination (beyond the sexual kind), racism, and the abuse of power, while focusing on only one kind of behavior, namely sexual assault. The organizers mentioned how Google refused to address any issues surrounding TVCs (temps, vendors, and contractors), despite being asked about it in the town hall. Google also did not acknowledge that the HR processes and systems within the company are not working; instead, Google decided to conduct a survey to gauge how people really feel about the HR teams within the workplace. "They heard loud and clear from 20,000 of us that these processes and reporting lines that are in place are set up the wrong way and need to be redesigned so that we normal employees have more of a say and more of a look into the decision-making processes, and they didn't even acknowledge that as a valid sentiment or idea," said Parker. All in all, there wasn't much "leadership", and there wasn't an understanding that "accountability was necessary".

Employees want their demands to be met

Employees want an employee representative on the board to speak on behalf of all the employees. They want accountability systems in place, and for Google to begin analyzing the cultures within companies that run on racism, discrimination, abuse of power, and sexism - the kind that excludes many from power and accrues resources to only a few. The employees acknowledge that Google is continuing to discuss the issue, but say they will have to keep pushing the conversation forward every step of the way. "I think we need to not be afraid to say the real words. I want to hear our execs say the real words like 'discrimination,' which was erased from their response to the demands. Like 'systemic racism'. I want to hear those real words," said Cecilia. Employees also specifically want demand #2, ending pay inequity, to be addressed by Google, as all they keep getting in response is that Google is "looking into it" and "studying" it. "I think that what they have to do is embrace the tough critique that they've gotten and try to understand where we're coming from and make these changes, and make them in collaboration with us, which has not happened," said Stapleton.

Employees continue to be cautiously hopeful

Employees believe that Google has incredible people at the company: thousands of people came together and worked on their vision for the world, on something that really mattered. "You know, we've called this the 'Walkout for Real Change' for a reason. Even if all of our optimism comes true and the best outcome and our demands are met, real change happens over time and we're going to hold people accountable to that real change actually going down, and hold us accountable for demanding it also, because we've got to get the rest of the demands met," says Cecilia.

Our thoughts on this topic

Just as history has proven time and again, information and data can be used to drive a narrative that benefits the storyteller and their agenda. Based on feedback collected from workers across the company, the Google walkout organizers pointed out systemic issues within the company that enabled sexually predatory behavior. They pointed out that sexual harassment is one of the symptoms, not the cause, and demanded that the root causes be addressed holistically through their set of five demands. To extinguish a movement or dissension in its infancy, regimes and corporations throughout history have used the following tactics:

- Be the benevolent ruler
- Divide and conquer the crowd by appealing to individual group needs, but never to everyone's collective demands
- Find a middle ground by agreeing to some demands while signaling that the other side should also take a few steps forward, thereby disengaging those whose demands aren't met and weakening the movement's leadership
- Use information to support the status quo
- Promote the influencers into top management roles

It appears that Google is using many of these approaches to appease the walkout participants. The Google management adopted classic labor negotiation tactics: sanctioning the protest, encouraging managers to participate, then agreeing to adopt the easiest item on the list of demands - one already implemented at some other tech companies - while restricting it to their own employees. By restricting the reforms to their own employees and creating a larger distance from TVCs, they seem to be thinning out the protesting crowd. By not engaging in open dialog on all the key issues highlighted, and by keeping key decision makers out of the town hall, they have created a situation of deniability. Lastly, by going back to surveying sentiments on key issues, they are relying both on time to subdue the anger felt and on the grassroots voice to dissipate. Will this be the tipping point for Google employees to unionize?

Read next:

- BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"
- OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
- Following Google, Facebook changes its forced arbitration policy for sexual harassment claims


Extracting data physically with dd

Packt
30 Apr 2015
10 min read
In this article by Rohit Tamma and Donnie Tindall, authors of the book Learning Android Forensics, we will cover physical data extraction using free and open source tools wherever possible. The majority of the material covered in this article will use ADB methods.

The dd command should be familiar to any examiner who has done traditional hard drive forensics. dd is a Linux command-line utility whose stated purpose is to convert and copy files, but it is frequently used in forensics to create bit-by-bit images of entire drives. Many variations of dd also exist and are commonly used, such as dcfldd, dc3dd, ddrescue, and dd_rescue. As the dd command is built for Linux-based systems, it is frequently included on Android platforms. This means that a method for creating an image of the device often already exists on the device!

The dd command has many options that can be set; only the forensically important ones are listed here. The format of the dd command is as follows:

    dd if=/dev/block/mmcblk0 of=/sdcard/blk0.img bs=4096 conv=notrunc,noerror,sync

- if: specifies the path of the input file to read from.
- of: specifies the path of the output file to write to.
- bs: specifies the block size. Data is read and written in chunks of the specified block size; it defaults to 512 bytes if not specified.
- conv: specifies the conversion options as its attributes:
  - notrunc: does not truncate the output file.
  - noerror: continues imaging if an error is encountered.
  - sync: in conjunction with the noerror option, writes 0x00 for blocks with an error. This is important for maintaining file offsets within the image.

Do not mix up the if and of flags; doing so could result in overwriting the target device! A full list of command options can be found at http://man7.org/linux/man-pages/man1/dd.1.html.

Note that there is an important correlation between the block size and the noerror and sync flags: if an error is encountered, 0x00 will be written for the entire block that was read (as determined by the block size). Thus, smaller block sizes result in less data being missed in the event of an error. The downside is that smaller block sizes typically result in a slower transfer rate. An examiner will have to decide whether a timely or a more accurate acquisition is preferred. Booting into recovery mode for the imaging process is the most forensically sound method.

Determining what to image

When imaging a computer, an examiner must first find what the drive is mounted as; /dev/sda, for example. The same is true when imaging an Android device. The first step is to launch the ADB shell and view the /proc/partitions file using the following command:

    cat /proc/partitions

The output lists all partitions on the device. In it, mmcblk0 is the entirety of the flash memory on the device. To image the entire flash memory, we could use /dev/block/mmcblk0 as the input file flag (if) for the dd command. Everything following it, indicated by p1 through p29, is a partition of the flash memory. The size is shown in blocks; in this case the block size is 1024 bytes, for a total internal storage size of approximately 32 GB. To obtain a full image of the device's internal memory, we would run the dd command with mmcblk0 as the input file.
However, we know that most of these partitions are unlikely to be forensically interesting; we're most likely only interested in a few of them. To view the corresponding names for each partition, we can look in the device's by-name directory. This does not exist on every device, and is sometimes in a different path, but for this device it is found at /dev/block/msm_sdcc.1/by-name. By navigating to that directory and running the ls -al command, we can see where each block is symbolically linked. If our investigation were only interested in the userdata partition, we would now know that it is mmcblk0p28, and could use that as the input file for the dd command.

If the by-name directory does not exist on the device, it may not be possible to identify every partition on the device. However, many of them can still be found by using the mount command within the ADB shell. On a different example device that does not contain a by-name directory, the mount output shows the data partition as mmcblk0p34 rather than mmcblk0p28. If the mount command does not work, the same information can be found using the cat /proc/mounts command. Depending on the device, other options for identifying partitions are the cat /proc/mtd and cat /proc/yaffs commands; these may work on older devices. Newer devices may include an fstab file in the root directory (typically called fstab.<device>) that will list mountable partitions.

Writing to an SD card

The output file of the dd command can be written to the device's SD card. This should only be done if the suspect SD card can be removed and replaced with a forensically sterile SD card, to ensure that the dd command's output does not overwrite evidence. Obviously, if writing to an SD card, ensure that the SD card is larger than the partition being imaged.

On newer devices, the /sdcard partition is actually a symbolic link to /data/media. In this case, using the dd command to copy the /data partition to the SD card won't work, and could corrupt the device, because the input file is essentially being written to itself. To determine where the SD card is symbolically linked to, simply open the ADB shell and run the ls -al command. If the SD card partition is not shown, the SD card likely needs to be mounted in recovery mode using the steps shown in the article Extracting Data Logically from Android Devices. If /sdcard turns out to be a symbolic link to /data/media, the dd command's output should not be written to the SD card; if /sdcard is not a symbolic link to /data, the /data partition image can be written to the SD card. On older devices, the SD card may not be symbolically linked at all.

After determining which block to read and where the SD card is symbolically linked, image the /data partition to the /sdcard using the following command:

    dd if=/dev/block/mmcblk0p28 of=/sdcard/data.img bs=512 conv=notrunc,noerror,sync

Now an image of the /data partition exists on the SD card. It can be pulled to the examiner's machine with the ADB pull command, or simply read from the SD card.
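Put together, a minimal sketch of the SD-card workflow above (the partition number mmcblk0p28 and the image filename are the examples used in this article; verify the correct block for the device being examined):

    # inside the ADB shell on the device: image the userdata partition to the sterile SD card
    dd if=/dev/block/mmcblk0p28 of=/sdcard/data.img bs=512 conv=notrunc,noerror,sync

    # back on the examiner's machine: pull the image for analysis
    adb pull /sdcard/data.img data.img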
Writing directly to an examiner's computer with netcat

If the image cannot be written to the SD card, an examiner can use netcat to write the image directly to their machine. The netcat tool is a Linux-based tool used for transferring data over a network connection. We recommend using a Linux or Mac computer for netcat, as it is built in, though Windows versions do exist. The examples below were done on a Mac.

Installing netcat on the device

Very few Android devices, if any, come with netcat installed. To check, simply open the ADB shell and type nc. If it returns saying nc is not found, netcat will have to be installed manually on the device. Netcat compiled for Android can be found in many places online. We have shared the version we used at http://sourceforge.net/projects/androidforensics-netcat/files/.

If we look back at the results from the mount command in the previous section, we can see that the /dev partition is mounted as tmpfs. The Linux term tmpfs means that the partition appears as an actual filesystem on the device, but is truly only stored in RAM. This means we can push netcat there without making any permanent changes to the device, using the following command on the examiner's computer:

    adb push nc /dev/Examiner_Folder/nc

The command should have created the Examiner_Folder directory in /dev, with nc in it. This can be verified by running the following command in the ADB shell:

    ls /dev/Examiner_Folder

Using netcat

Now that the netcat binary is on the device, we need to give it permission to execute from the ADB shell. This can be done as follows:

    chmod +x /dev/Examiner_Folder/nc

We will need two terminal windows open, with the ADB shell open in one of them. The other will be used to listen to the data being sent from the device. Now we need to enable port forwarding over ADB from the examiner's computer:

    adb forward tcp:9999 tcp:9999

Here, 9999 is the port we chose to use for netcat; it can be any arbitrary port number between 1023 and 65535 on a Linux or Mac system (ports 1023 and below are reserved for system processes and require root permission to use). Windows will allow any port to be assigned. In the terminal window with the ADB shell, run the following command:

    dd if=/dev/block/mmcblk0p34 bs=512 conv=notrunc,noerror,sync | /dev/Examiner_Folder/nc -l -p 9999

mmcblk0p34 is the user data partition on this device; however, the entire flash memory or any other partition could also be imaged with this method. In most cases, it is best practice to image the entirety of the flash memory in order to acquire all possible data from the device. Some commercial forensic tools may also require the entire memory image, and may not properly handle an image of a single partition. In the other terminal window, run:

    nc 127.0.0.1 9999 > data_partition.img

The data_partition.img file should now be created in the current directory of the examiner's computer. When the data has finished transferring, netcat in both terminals will terminate and return to the command prompt. The process can take a significant amount of time, depending on the size of the image.
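For reference, here is the whole netcat sequence consolidated into one sketch (the folder name, port, and partition number are the examples used above; adjust them for the device at hand):

    # terminal 1, on the examiner's machine: push netcat and forward the port
    adb push nc /dev/Examiner_Folder/nc
    adb forward tcp:9999 tcp:9999

    # terminal 1, inside the ADB shell: make netcat executable, then image and listen
    adb shell
    chmod +x /dev/Examiner_Folder/nc
    dd if=/dev/block/mmcblk0p34 bs=512 conv=notrunc,noerror,sync | /dev/Examiner_Folder/nc -l -p 9999

    # terminal 2, on the examiner's machine: connect and receive the image
    nc 127.0.0.1 9999 > data_partition.img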
Summary

This article discussed techniques used for physically imaging internal memory or SD cards. Some common characteristics of dd to keep in mind:

- It is usually pre-installed on the device
- It may not work on MTD blocks
- It does not obtain the out-of-band area

Additionally, each imaging technique can be used either to save the image on the device (typically on the SD card) or, with netcat, to write the file to the examiner's computer.

Writing to the SD card:

- Easy; doesn't require additional binaries to be pushed to the device
- Familiar to most examiners
- Cannot be used if the SD card is symbolically linked to the partition being imaged
- Cannot be used if the entire memory is being imaged

Using netcat:

- Usually requires another binary to be pushed to the device
- Somewhat complicated; the steps must be followed exactly
- Works no matter what is being imaged
- May be more time-consuming than writing to the SD card

Further resources on this subject:

- Reversing Android Applications [article]
- Introduction to Mobile Forensics [article]
- Processing the Case [article]


Why use JavaScript for machine learning?

Sugandha Lahoti
01 Sep 2018
4 min read
Python has always been, and remains, the language of choice for machine learning, partly due to the maturity of the language, partly due to the maturity of the ecosystem, and partly due to the positive feedback loop of early ML efforts in Python. Recent developments in the JavaScript world, however, are making JavaScript more attractive to ML projects. I think we will see a major ML renaissance in JavaScript within a few years, especially as laptops and mobile devices become ever more powerful and JavaScript itself surges in popularity. This post is extracted from the book Hands-on Machine Learning with JavaScript by Burak Kanber. The book is a definitive guide to creating intelligent web applications with the best of machine learning and JavaScript.

Advantages and challenges of JavaScript

JavaScript, like any other tool, has its advantages and disadvantages. Much of the historical criticism of JavaScript has focused on a few common themes: strange behavior in type coercion, the prototypical object-oriented model, difficulty organizing large codebases, and managing deeply nested asynchronous function calls with what many developers call callback hell. Fortunately, most of these historic gripes have been resolved by the introduction of ES6, that is, ECMAScript 2015, a recent update to the JavaScript syntax.

Con: Immature ecosystem for machine learning development

Despite the recent language improvements, most developers would still advise against using JavaScript for ML for one reason: the ecosystem. The Python ecosystem for ML is so mature and rich that it's difficult to justify choosing any other ecosystem. But this logic is self-fulfilling and self-defeating; we need brave individuals to take the leap and work on real ML problems if we want JavaScript's ecosystem to mature. Fortunately, JavaScript has been the most popular programming language on GitHub for a few years running, and is growing in popularity by almost every metric.

Pro #1: JavaScript is the most popular web development language, with a mature npm ecosystem

There are some advantages to using JavaScript for ML. Its popularity is one; while ML in JavaScript is not very popular at the moment, the language itself is. As demand for ML applications rises, and as hardware becomes faster and cheaper, it's only natural for ML to become more prevalent in the JavaScript world. There are tons of resources available for learning JavaScript in general, maintaining Node.js servers, and deploying JavaScript applications. The Node Package Manager (npm) ecosystem is also large and still growing, and while there aren't many very mature ML packages available, there are a number of well-built, useful tools out there that will come to maturity soon.

Pro #2: JavaScript is now a general-purpose, cross-platform programming language

Another advantage of using JavaScript is the universality of the language. The modern web browser is essentially a portable application platform that allows you to run your code, basically without modification, on nearly any device. Tools like Electron (while considered by many to be bloated) allow developers to quickly develop and deploy downloadable desktop applications to any operating system. Node.js lets you run your code in a server environment. React Native brings your JavaScript code to the native mobile application environment, and may eventually allow you to develop desktop applications as well. JavaScript is no longer confined to just dynamic web interactions; it's now a general-purpose, cross-platform programming language.

Pro #3: JavaScript makes machine learning accessible to web and front-end developers

Finally, using JavaScript makes ML accessible to web and front-end developers, a group that has historically been left out of the ML discussion. Server-side applications are typically preferred for ML tools, since that is where the computing power is. That fact has historically made it difficult for web developers to get into the ML game, but as hardware improves, even complex ML models can be run on the client, whether that's the desktop or the mobile browser. If web developers, front-end developers, and JavaScript developers all start learning about ML today, that same community will be in a position to improve the ML tools available to us all tomorrow. If we take these technologies and democratize them, exposing as many people as possible to the concepts behind ML, we will ultimately elevate the community and seed the next generation of ML researchers.

Summary

In this article, we've discussed the important moments of JavaScript's history as applied to ML. We've discussed some advantages of using JavaScript for machine learning, and also some of the challenges we're facing, particularly in terms of the machine learning ecosystem. To begin exploring and processing the data itself, read the book Hands-on Machine Learning with JavaScript.

Read next:

- 5 JavaScript machine learning libraries you need to know
- V8 JavaScript Engine releases version 6.9!
- HTML5 and the rise of modern JavaScript browser APIs [Tutorial]


12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]

Richard Gall
05 Jun 2019
7 min read
Visual Studio Code might have appeared as a bit of a surprise when it was first launched by Microsoft - why reach out to JavaScript developers? When did Node.js developers become so irresistible? However, once you take a look inside, you can begin to see why Visual Studio Code represents such an enticing proposition for Node.js and other JavaScript developers. Put simply, the range of extensions available is unmatched by any other text editor.

Extensions are almost like apps when you're using Visual Studio Code. In fact, there's pretty much an app store where you can find extensions for a huge range of tasks. These extensions are designed with productivity in mind, but they're not just time-saving tools. Arguably, Visual Studio Code extensions are what allow Visual Studio Code to walk the line between text editor and IDE. They give you more functionality than you might get from an ordinary text editor, while it remains lightweight enough not to carry the baggage an IDE likely will. With a growing community around Visual Studio Code, the number of extensions is only going to grow. And if there's something missing, you can always develop one yourself. But before we get ahead of ourselves, let's take a brief look at some of the best Visual Studio Code extensions for Node.js developers - as well as a few others…

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free courtesy of Microsoft here.

The best Node.js Visual Studio Code extensions

Node.js Modules Intellisense

If you're a Node.js developer, the Node.js Modules Intellisense extension is vital. Basically, it will autocomplete JavaScript (or TypeScript) import statements.

npm Intellisense

npm is such a great part of working with Node.js. It's such a simple thing, but it has brought a big shift in the way we approach application development, giving you immediate access to the modules you need to run your application. With Visual Studio Code, it's even easier. It's pretty obvious what npm Intellisense does - it autocompletes npm modules into your code when you try to import them.

Search Node_Modules

Sometimes, you might want to edit a file within the node_modules folder. To do this, you'd normally have to do some manual work to find the one you want. Fortunately, with the Search Node_Modules extension, you can quickly navigate the files inside your node_modules folder.

Node Exec

Node Exec is another simple but very neat extension. It lets you quickly execute code or your current file using Node. Once you've installed it, all you need to do is hit F8 (or run the Execute node.js command).

Node Readme

Documentation is essential. If 2018 has taught us anything, it's transparency, so we could all do with an easier way to ensure that documentation is built into our workflows and boosts rather than drains our productivity. This is why Node Readme is such a nice extension - it simply allows you to quickly open the documentation for a particular package.

View Node Package

Like Node Readme, the View Node Package extension helps you quickly get a better understanding of a particular package while remaining inside VS Code. You can take a look inside the project's repository without having to leave Visual Studio Code.

Read next: 5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]

Other useful Visual Studio Code extensions

While the extensions above are uniquely useful if you're working with Node, there are other extensions that could prove invaluable. From cleaner code and debugging to simple deployment - as a Microsoft rival once almost said, there's an extension for that…

ESLint

ESLint is perhaps one of the most popular extensions in the Visual Studio Code marketplace. By bringing ESLint into your development workflow to fix troublesome or inconsistent aspects of code, you can immediately tackle one of the trickier aspects of JavaScript - the fact that errors can creep in a little too easily. ESLint will typically lint an individual file as you type. However, it's possible to perform the action on an entire workspace; all you need to do is set eslint.provideLintTask to true.
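For reference, that setting would sit in a settings file roughly like this (the setting name is the one quoted above; whether it lives in user or workspace settings is your choice):

    // .vscode/settings.json
    {
        "eslint.provideLintTask": true
    }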
JavaScript ES6 Code Snippets

This is a must-have extension for any JavaScript developer working in Visual Studio Code. While VS Code does have built-in snippets, this extension adds ES6 snippets to make you that little bit more productive. The extension has import and export snippets, class helpers, and method snippets.

Debugger for Chrome

Debugging code in the browser is the method of choice, particularly if you're working on the front end. But that means you have to leave your code editor - fine, but necessary, right? You won't need to do that anymore, thanks to the Debugger for Chrome extension. It does exactly what it says on the proverbial tin: you can debug in Chrome without leaving VS Code.

Live Server

Live Server is a really nice extension that has seen a huge amount of uptake from the community. At the time of writing, it has received a remarkable 2.2 million downloads. The idea is simple: you can launch a local development server that responds in real time to the changes you make in your editor. What makes this particularly interesting, not least for Node developers, is that it works for server-side code as well as static files.

Settings Sync

Settings Sync is a nice extension for developers who find themselves working on different machines. Basically, it allows you to run the same configuration of Visual Studio Code across different instances. Of course, this is also helpful if you've just got your hands on a new laptop and are dreading setting everything up all over again…

Live Share

Want to partner with a colleague on a project, or work together to solve a problem? Ordinarily, you might have had to share screens or work together via a shared repository, but thanks to Live Share, you can simply load up someone else's project in your editor.

Azure Functions

Most of the extensions we've seen will largely help you write better code and become a more productive developer. But the Azure Functions extension is another step up - it lets you build, deploy, and debug a serverless app inside Visual Studio Code. It's currently only in preview, but if you're new to serverless, it does offer a nice way of seeing how it's done in practice!

Read next: 5 developers explain why they use Visual Studio Code [Sponsored by Microsoft]

Start exploring Visual Studio Code

This list is far from exhaustive. The number of extensions available in the Visual Studio Code marketplace is astonishing - you're guaranteed to find something useful. The best way to get started is simply to download Visual Studio Code and try it out for yourself. Let us know how it compares to other text editors - what do you like? And what would you change? You can download Visual Studio Code here. Find out how to get started with Node.js on Azure. Download Learning Node.js with Azure for free from Microsoft.


Python Design Patterns in Depth: The Singleton Pattern

Packt
15 Feb 2016
14 min read
There are situations where you need to create only one instance of data throughout the lifetime of a program. This can be a class instance, a list, or a dictionary, for example. The creation of a second instance is undesirable, as it can result in logical errors or malfunctioning of the program. The design pattern that allows you to create only one instance of data is called singleton. In this article, you will learn about module-level, classic, and borg singletons; you'll also learn how they work and when to use them, and build a two-threaded web crawler that uses a singleton to access the shared resource.

Singleton is the best candidate when the requirements are as follows:

- Controlling concurrent access to a shared resource
- You need a global point of access for the resource from multiple or different parts of the system
- You need to have only one object

Some typical use cases of a singleton are:

- The logging class and its subclasses (a global point of access for the logging class to send messages to the log)
- Printer spooler (your application should have only a single instance of the spooler, in order to avoid conflicting requests for the same resource)
- Managing a connection to a database
- File manager
- Retrieving and storing information in external configuration files
- Read-only singletons storing some global state (user language, time, time zone, application path, and so on)

There are several ways to implement singletons. We will look at the module-level singleton, the classic singleton, and the borg singleton.

Module-level singleton

All modules are singletons by nature because of Python's module importing steps:

1. Check whether the module is already imported. If yes, return it.
2. If not, find the module, initialize it, and return it.

Initializing a module means executing its code, including all module-level assignments. When you import a module for the first time, all of the initialization is done; if you try to import the module a second time, Python returns the already-initialized module, so the initialization is not repeated and you get the previously imported module with all of its data. So, if you want to quickly make a singleton, use the following steps and keep the shared data as a module attribute.

singleton.py:

    only_one_var = "I'm only one var"

module1.py:

    import singleton
    print singleton.only_one_var
    singleton.only_one_var += " after modification"
    import module2

module2.py:

    import singleton
    print singleton.only_one_var

Here, if you import the singleton module's global variable and change its value in the module1 module, module2 will see the changed variable. This approach is quick and sometimes is all that you need; however, we need to consider the following points:

- It's pretty error-prone. For example, if you happen to forget the global statement, variables local to the function will be created and the module's variables won't be changed, which is not what you want.
- It's ugly, especially if you have a lot of objects that should remain singletons.
- It pollutes the module namespace with unnecessary variables.
- It doesn't permit lazy allocation and initialization; all global variables will be loaded during the module import process.
- It's not possible to reuse the code, because you cannot use inheritance.
- There are no special methods and no object-oriented programming benefits at all.

Classic singleton

In a classic singleton in Python, we check whether an instance has already been created.
If it has, we return it; otherwise, we create a new instance, assign it to a class attribute, and return it. Let's try to create a dedicated singleton class:

    class Singleton(object):
        def __new__(cls):
            if not hasattr(cls, 'instance'):
                cls.instance = super(Singleton, cls).__new__(cls)
            return cls.instance

Here, in the special __new__ method, which is called right before __init__, we check whether an instance was created earlier. If not, we create a new instance; otherwise, we return the already-created instance. Let's check how it works:

    >>> singleton = Singleton()
    >>> another_singleton = Singleton()
    >>> singleton is another_singleton
    True
    >>> singleton.only_one_var = "I'm only one var"
    >>> another_singleton.only_one_var
    I'm only one var

Try to subclass the Singleton class with another one:

    class Child(Singleton):
        pass

If it's a successor of Singleton, all of its instances should also be instances of Singleton, thus sharing its state. But this doesn't work, as illustrated in the following code:

    >>> child = Child()
    >>> child is singleton
    False
    >>> child.only_one_var
    AttributeError: Child instance has no attribute 'only_one_var'

To avoid this situation, the borg singleton is used.

Borg singleton

Borg is also known as monostate. In the borg pattern, all of the instances are different, but they share the same state. In the following code, the shared state is maintained in the _shared_state attribute, and all new instances of the Borg class will have this state, as defined in the __new__ class method:

    class Borg(object):
        _shared_state = {}

        def __new__(cls, *args, **kwargs):
            obj = super(Borg, cls).__new__(cls, *args, **kwargs)
            obj.__dict__ = cls._shared_state
            return obj

Generally, Python stores the instance state in the __dict__ dictionary and, when instantiated normally, every instance has its own __dict__. But here we deliberately assign the class variable _shared_state to all of the created instances. Here is how it works with subclassing:

    class Child(Borg):
        pass

    >>> borg = Borg()
    >>> another_borg = Borg()
    >>> borg is another_borg
    False
    >>> child = Child()
    >>> borg.only_one_var = "I'm the only one var"
    >>> child.only_one_var
    I'm the only one var

So, despite the fact that you can't compare the objects by their identity using the is statement, all child objects share the parents' state. If you want a class that is a descendant of the Borg class but has a different state, you can reset _shared_state as follows:

    class AnotherChild(Borg):
        _shared_state = {}

    >>> another_child = AnotherChild()
    >>> another_child.only_one_var
    AttributeError: AnotherChild instance has no attribute 'only_one_var'

Which type of singleton should be used is up to you. If you expect that your singleton will not be inherited, you can choose the classic singleton; otherwise, it's better to stick with borg.

Implementation in Python

As a practical example, we'll create a simple web crawler that scans a website you open on it, follows all the links that lead to the same website but to other pages, and downloads all of the images it finds. To do this, we'll need two functions: a function that scans a website for links leading to other pages, to build a set of pages to visit, and a function that scans a page for images and downloads them. To make it quicker, we'll download images in two threads.
These two threads should not interfere with each other, so a page should not be scanned if another thread has already scanned it, and images that are already downloaded should not be downloaded again. So, a set of downloaded images and scanned web pages will be a shared resource for our application, and we'll keep it in a singleton instance.

In this example, you will need BeautifulSoup, a library for parsing and screen-scraping websites, and the HTTP client library httplib2. It should be sufficient to install both with either of the following commands:

    $ sudo pip install BeautifulSoup httplib2
    $ sudo easy_install BeautifulSoup httplib2

First of all, we'll create a Singleton class. Let's use the classic singleton in this example:

    import httplib2
    import os
    import re
    import threading
    import urllib
    from urlparse import urlparse, urljoin
    from BeautifulSoup import BeautifulSoup

    class Singleton(object):
        def __new__(cls):
            if not hasattr(cls, 'instance'):
                cls.instance = super(Singleton, cls).__new__(cls)
            return cls.instance

It will return the singleton object to all parts of the code that request it. Next, we'll create a class for creating a thread. In this thread, we'll download images from the website:

    class ImageDownloaderThread(threading.Thread):
        """A thread for downloading images in parallel."""
        def __init__(self, thread_id, name, counter):
            threading.Thread.__init__(self)
            self.name = name

        def run(self):
            print 'Starting thread ', self.name
            download_images(self.name)
            print 'Finished thread ', self.name

The following function traverses the website using a BFS algorithm, finds links, and adds them to a set for further downloading. We are able to specify the maximum number of links to follow if the website is too large:

    def traverse_site(max_links=10):
        link_parser_singleton = Singleton()

        # While we have pages to parse in queue
        while link_parser_singleton.queue_to_parse:
            # If collected enough links to download images, return
            if len(link_parser_singleton.to_visit) == max_links:
                return

            url = link_parser_singleton.queue_to_parse.pop()
            http = httplib2.Http()
            try:
                status, response = http.request(url)
            except Exception:
                continue

            # Skip if not a web page
            if status.get('content-type') != 'text/html':
                continue

            # Add the link to queue for downloading images
            link_parser_singleton.to_visit.add(url)
            print 'Added', url, 'to queue'

            bs = BeautifulSoup(response)
            for link in BeautifulSoup.findAll(bs, 'a'):
                link_url = link.get('href')
                # <a> tag may not contain href attribute
                if not link_url:
                    continue

                parsed = urlparse(link_url)
                # If link follows to external webpage, skip it
                if parsed.netloc and parsed.netloc != parsed_root.netloc:
                    continue

                # Construct a full url from a link which can be relative
                link_url = (parsed.scheme or parsed_root.scheme) + '://' + (parsed.netloc or parsed_root.netloc) + parsed.path or ''

                # If link was added previously, skip it
                if link_url in link_parser_singleton.to_visit:
                    continue

                # Add a link for further parsing
                link_parser_singleton.queue_to_parse = [link_url] + link_parser_singleton.queue_to_parse

The following function downloads images from the last web resource page in the singleton.to_visit queue and saves them to the images directory.
Here, we use a singleton for synchronizing shared data, which is a set of pages to visit, between two threads:

    def download_images(thread_name):
        singleton = Singleton()

        # While we have pages where we have not downloaded images
        while singleton.to_visit:
            url = singleton.to_visit.pop()
            http = httplib2.Http()
            print thread_name, 'Starting downloading images from', url

            try:
                status, response = http.request(url)
            except Exception:
                continue

            bs = BeautifulSoup(response)

            # Find all <img> tags
            images = BeautifulSoup.findAll(bs, 'img')

            for image in images:
                # Get image source url which can be absolute or relative
                src = image.get('src')

                # Construct a full url. If the image url is relative,
                # it will be prepended with webpage domain.
                # If the image url is absolute, it will remain as is
                src = urljoin(url, src)

                # Get a base name, for example 'image.png', to name the file locally
                basename = os.path.basename(src)

                if src not in singleton.downloaded:
                    singleton.downloaded.add(src)
                    print 'Downloading', src
                    # Download image to local filesystem
                    urllib.urlretrieve(src, os.path.join('images', basename))

            print thread_name, 'finished downloading images from', url

Our client code is as follows:

    if __name__ == '__main__':
        root = 'http://python.org'
        parsed_root = urlparse(root)

        singleton = Singleton()
        singleton.queue_to_parse = [root]
        # A set of urls to download images from
        singleton.to_visit = set()
        # Downloaded images
        singleton.downloaded = set()

        traverse_site()

        # Create images directory if not exists
        if not os.path.exists('images'):
            os.makedirs('images')

        # Create new threads
        thread1 = ImageDownloaderThread(1, "Thread-1", 1)
        thread2 = ImageDownloaderThread(2, "Thread-2", 2)

        # Start new threads
        thread1.start()
        thread2.start()

Run the crawler using the following command:

    $ python crawler.py

The output will vary between runs, because the order in which the threads access resources is not predictable. If you go to the images directory, you will find the downloaded images there.

Summary

To learn more about design patterns in depth, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Learning Python Design Patterns – Second Edition (https://www.packtpub.com/application-development/learning-python-design-patterns-second-edition)
- Mastering Python Design Patterns (https://www.packtpub.com/application-development/mastering-python-design-patterns)

Further resources on this subject:

- Python Design Patterns in Depth: The Factory Pattern [Article]
- Recommending Movies at Scale (Python) [Article]
- Customizing IPython [Article]


How to navigate files in a Vue app using the Dropbox API

Pravin Dhandre
05 Jun 2018
13 min read
Dropbox API is a set of HTTP endpoints that help apps integrate with Dropbox easily. It allows developers to work with files present in Dropbox and provides access to advanced functionality like full-text search, thumbnails, and sharing. In this article, we will discuss the Dropbox API functionalities listed below to navigate and query your files and folders:

- Loading and querying the Dropbox API
- Listing the directories and files from your Dropbox account
- Adding a loading state to your app
- Using Vue animations

To get started, you will need a Dropbox account and to follow the steps detailed in this article. If you don't have one, sign up and add a few dummy files and folders. The content of your Dropbox doesn't matter, but having folders to navigate through will help with understanding the code. This article is an excerpt from a book written by Mike Street titled Vue.js 2.x by Example. The book puts Vue.js into a real-world context, guiding you through example projects to help you build Vue.js applications from scratch.

Getting started - loading the libraries

Create a new HTML page for your app to run in. Create the HTML structure required for a web page and include your app view wrapper. It's called #app here, but call it whatever you want - just remember to update the JavaScript. As our app code is going to get quite chunky, make a separate JavaScript file and include it at the bottom of the document. You will also need to include Vue and the Dropbox API SDK. As before, you can either reference the remote files or download a local copy of the library files; download a local copy for both speed and compatibility reasons, and include the three JavaScript files at the bottom of your HTML file. Create your app.js and initialize a new Vue instance, using the el property to mount the instance onto the ID in your view.

Creating a Dropbox app and initializing the SDK

Before we interact with the Vue instance, we need to connect to the Dropbox API through the SDK. This is done via an API key that is generated by Dropbox itself to keep track of what is connecting to your account, and for this Dropbox requires you to make a custom Dropbox app. Head to the Dropbox developers area and select Create your app. Choose Dropbox API and select either a restricted folder or full access. This depends on your needs, but for testing, choose Full Dropbox. Give your app a name and click the button Create app.

Generate an access token for your app. To do so, when viewing the app details page, click the Generate button under Generated access token. This will give you a long string of numbers and letters - copy and paste that into your editor and store it as a variable at the top of your JavaScript. In what follows, the API key will be referred to as XXXX.

Now that we have our API key, we can access the files and folders from our Dropbox. Initialize the API and pass your accessToken variable to the accessToken property of the Dropbox API. We now have access to Dropbox via the dbx variable, and we can verify our connection to Dropbox is working by connecting and outputting the contents of the root path. This code uses JavaScript promises, which are a way of adding actions to code without requiring callback functions. Take note of the path property in particular: it lets us pass in a folder path to list the files and folders within that directory.
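Pulled together, a minimal sketch of this setup (the token value is a placeholder; the global Dropbox constructor and the response.entries shape match the SDK version the book targets - newer SDK releases expose the constructor as Dropbox.Dropbox and wrap results in response.result):

    // app.js - assumes Vue and the Dropbox SDK are loaded via <script> tags before this file
    var accessToken = 'XXXX'; // the generated access token from your Dropbox app

    var dbx = new Dropbox({
        accessToken: accessToken
    });

    // List the contents of the Dropbox root; an empty string means the root folder
    dbx.filesListFolder({ path: '' })
        .then(function (response) {
            console.log(response.entries); // one object per file or folder
        })
        .catch(function (error) {
            console.error(error);
        });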
For example, if you had a folder called images in your Dropbox, you could change the parameter value to /images and the file list returned would be the files and folders within that directory. Open your JavaScript console and check the output; you should get an array containing several objects - one for each file or folder in the root of your Dropbox.

Displaying your data and using Vue to get it

Now that we can retrieve our data using the Dropbox API, it's time to retrieve it within our Vue instance and display it in our view. This app is going to be built entirely using components, so we can take advantage of the compartmentalized data and methods. It also means the code is modular and shareable, should you want to integrate it into other apps. We are also going to take advantage of Vue's native created() function - we'll cover when it gets triggered in a bit.

Create the component

First off, create your custom HTML element, <dropbox-viewer>, in your view, and create a <script> template block at the bottom of the page for our HTML layout. Initialize your component in your app.js file, pointing it at the template ID. Viewing the app in the browser should show the heading from the template. The next step is to integrate the Dropbox API into the component.

Retrieve the Dropbox data

Create a new method called dropbox and move into it the code that calls the Dropbox class and returns the instance. This gives us access to the Dropbox API through the component by calling this.dropbox(). We are also going to integrate our API key into the component: create a data function that returns an object containing your access token, and update the dropbox method to use the local version of the key.

We now need to add the ability for the component to get the directory list. For this, we are going to create another method that takes a single parameter - the path. This will give us the ability later to request the structure of a different path or folder if required. Use the code provided earlier, changing the dbx variable to this.dropbox(), and update the Dropbox filesListFolder call to accept the path parameter passed in, rather than a fixed value. Running this app in the browser will show the Dropbox heading, but won't retrieve any folders, because the methods have not been called yet.

The Vue life cycle hooks

This is where the created() function comes in. The created() function gets called once the Vue instance has initialized the data and methods, but has yet to mount the instance on the HTML component. There are several other functions available at various points in the life cycle; more about these can be read at Alligator.io. Using the created() function gives us access to the methods and data while being able to start our retrieval process as Vue is mounting the app. The time between these various stages is split-second, but every moment counts when it comes to performance and creating a quick app. There is no point waiting for the app to be fully mounted before processing data if we can start the task early.

Create the created() function on your component and call the getFolderStructure method, passing in an empty string for the path to get the root of your Dropbox. Running the app now in your browser will output the folder list to your console, which should give the same result as before.

We now need to display our list of files in the view. To do this, we are going to create an empty array in our component and populate it with the result of our Dropbox query. This has the advantage of giving Vue a variable to loop through in the view, even before it has any content.
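Here is a sketch of the component as described so far, consolidating the steps above (the template ID, method names, and structure array follow the prose; the token value is a placeholder):

    // app.js - a sketch of the dropbox-viewer component described above
    Vue.component('dropbox-viewer', {
        template: '#dropbox-viewer-template',

        data: function () {
            return {
                accessToken: 'XXXX', // your generated access token
                structure: []        // filled with the entries returned by Dropbox
            };
        },

        methods: {
            // Return a Dropbox API instance using the component's token
            dropbox: function () {
                return new Dropbox({
                    accessToken: this.accessToken
                });
            },

            // Fetch the contents of the given folder path
            getFolderStructure: function (path) {
                var vm = this;
                vm.dropbox().filesListFolder({ path: path })
                    .then(function (response) {
                        console.log(response.entries); // inspect the entries
                        vm.structure = response.entries;
                    })
                    .catch(function (error) {
                        console.error(error);
                    });
            }
        },

        created: function () {
            // An empty string requests the root of the Dropbox account
            this.getFolderStructure('');
        }
    });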
Displaying the Dropbox data

Create a new property in your data object titled structure, and assign it an empty array. In the response function of the folder retrieval, assign response.entries to this.structure. Leave the console.log in place, as we will need to inspect the entries to work out what to output in our template.

We can now update our view to display the folders and files from your Dropbox. As the structure array is available in our view, create a <ul> with a repeatable <li> looping through the structure. As we are now adding a second element, and Vue requires templates to have a single containing element, wrap your heading and list in a <div>. Viewing the app in the browser will show a number of empty bullet points while the array appears in the JavaScript console.

To work out what fields and properties you can display, expand the array in the JavaScript console and then expand each object. You should notice that each object has a collection of similar properties and a few that vary between folders and files. The first property, .tag, helps us identify whether the item is a file or a folder. Both types then have the following properties in common:

- id: A unique identifier for the item in Dropbox
- name: The name of the file or folder, irrespective of where the item is
- path_display: The full path of the item, with the case matching that of the files and folders
- path_lower: The same as path_display, but all lowercase

Items with a .tag of file also contain several more fields for us to display:

- client_modified: The date when the file was added to Dropbox
- content_hash: A hash of the file, used for identifying whether it is different from a local or remote copy; more can be read about this on the Dropbox website
- rev: A unique identifier of the version of the file
- server_modified: The last time the file was modified on Dropbox
- size: The size of the file in bytes

To begin with, we are going to display the name of the item and the size, if present. Update the list item to show these properties.

More file meta information

To make our file and folder view a bit more useful, we can add richer content and metadata to files such as images. These details are made available by enabling the include_media_info option in the Dropbox API. Head back to your getFolderStructure method and add this parameter after path. Inspecting the results from this new API call will reveal the media_info key for videos and images. Expanding it will reveal several more pieces of information about the file, for example, dimensions. If you want to add these, you will need to check that the media_info object exists before displaying the information.

Try updating the path when retrieving the data from Dropbox. For example, if you have a folder called images, change the this.getFolderStructure parameter to /images. If you're not sure what the path is, analyze the data in the JavaScript console and copy the value of the path_lower attribute of a folder.
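Pulling the view steps above together, here is a sketch of what the component template might look like (the template ID and field names follow the prose; the media_info shape shown is the Dropbox API's metadata/dimensions nesting, and the size is still raw bytes at this stage):

    <!-- the template referenced by the component sketch above -->
    <script type="text/x-template" id="dropbox-viewer-template">
        <div>
            <h1>Dropbox</h1>
            <ul>
                <li v-for="entry in structure">
                    <strong>{{ entry.name }}</strong>
                    <span v-if="entry.size"> - {{ entry.size }}</span>
                    <span v-if="entry.media_info && entry.media_info.metadata.dimensions">
                        ({{ entry.media_info.metadata.dimensions.width }} x {{ entry.media_info.metadata.dimensions.height }})
                    </span>
                </li>
            </ul>
        </div>
    </script>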
Breaking this down: first, create a new key on the data object that contains an array of units called byteSizes. This is what will get appended to the figure, so feel free to make these properties either lowercase or full words, for example, megabyte. Next, add the bytesToSize method to your component; it takes one parameter, bytes, and outputs a formatted string with the unit at the end. We can now utilize this method in our view. Add a loading screen The last step of this tutorial is to make a loading screen for our app. This will tell the user the app is loading, should the Dropbox API be running slowly (or you have a lot of data to show!). The theory behind this loading screen is fairly basic. We will set a loading variable to true by default, which then gets set to false once the data has loaded. Based on the result of this variable, we will utilize view attributes to show, and then hide, an element with the loading text or animation in, and also reveal the loaded data list. Create a new key in the data object titled isLoading, and set this variable to true by default. Within the getFolderStructure method on your component, set the isLoading variable to false; this should happen within the promise, after you have set the structure. We can now utilize this variable in our view to show and hide a loading container. Create a new <div> before the unordered list containing some loading text. Feel free to add a CSS animation or an animated gif - anything to let the user know the app is retrieving data. We now need to show the loading div only if the app is loading, and the list only once the data has loaded. As this is just one change to the DOM, we can use the v-if directive. To give you the freedom of rearranging the HTML, add the attribute to both instead of using v-else. To show or hide, we just need to check the status of the isLoading variable; we can prepend an exclamation mark to the list's expression so it only shows if the app is not loading. Our app should now show the loading container once mounted, and then it should show the list once the app data has been gathered. A condensed recap of the complete component appears at the end of this section. Animating between states As a nice enhancement for the user, we can add some transitions between components and states. Helpfully, Vue includes some built-in transition effects. Working with CSS, these transitions allow you to easily add fades, swipes, and other CSS animations when DOM elements are being inserted. More information about transitions can be found in the Vue documentation. The first step is to add the Vue custom HTML <transition> element. Wrap both your loading container and list with separate transition elements, and give each an attribute of name with a value of fade. Now add the corresponding CSS to either the head of your document, or a separate style sheet if you already have one. With the transition element, Vue adds and removes various CSS classes based on the state and time of the transition. All of these begin with the name passed in via the attribute, and are appended with the current stage of the transition. Try the app in your browser; you should notice the loading container fading out and the file list fading in. Although in this basic example the list jumps up once the fading has completed, it's an example to help you understand using transitions in Vue. We learned to make a Dropbox viewer that lists files and folders from a Dropbox account, and also used Vue animations to transition between states. Do check out the book Vue.js 2.x by Example to start integrating remote data into your web applications swiftly.
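As promised above, here is a condensed recap sketch of the template, component, and transition styles built up over this tutorial. It is a sketch under the assumptions noted earlier (the book-era Dropbox JS SDK with a global Dropbox constructor, and Vue 2's default transition class names); the 0.5s fade duration and the placeholder markup are illustrative choices, not values taken from the book.

<!-- Sketch: template with heading, loading state, list, and fade transitions -->
<script type="text/x-template" id="dropbox-viewer-template">
  <div>
    <h1>Dropbox</h1>
    <transition name="fade">
      <div v-if="isLoading">Loading...</div>
    </transition>
    <transition name="fade">
      <ul v-if="!isLoading">
        <li v-for="entry in structure">
          <strong>{{ entry.name }}</strong>
          <span v-if="entry.size"> - {{ bytesToSize(entry.size) }}</span>
        </li>
      </ul>
    </transition>
  </div>
</script>

// Sketch: the component with structure, loading flag, and size formatting
Vue.component('dropbox-viewer', {
  template: '#dropbox-viewer-template',
  data() {
    return {
      accessToken: 'access-token-here',
      structure: [],
      isLoading: true,
      byteSizes: ['Bytes', 'KB', 'MB', 'GB', 'TB']
    };
  },
  methods: {
    dropbox() {
      return new Dropbox({ accessToken: this.accessToken });
    },
    getFolderStructure(path) {
      this.dropbox().filesListFolder({
        path: path,
        include_media_info: true
      })
      .then(response => {
        this.structure = response.entries;
        this.isLoading = false;
      })
      .catch(error => console.log(error));
    },
    bytesToSize(bytes) {
      if (bytes === 0) return '0 ' + this.byteSizes[0];
      const i = Math.floor(Math.log(bytes) / Math.log(1024));
      return (bytes / Math.pow(1024, i)).toFixed(1) + ' ' + this.byteSizes[i];
    }
  },
  created() {
    this.getFolderStructure('');
  }
});

/* Vue 2's default enter/leave classes for a transition named "fade" */
.fade-enter-active, .fade-leave-active { transition: opacity 0.5s; }
.fade-enter, .fade-leave-to { opacity: 0; }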
Opening up to OpenID with Spring Security

Packt
27 May 2010
7 min read
(For more resources on Spring, see here.) The promising world of OpenID The promise of OpenID as a technology is to allow users on the web to centralize their personal data and information with a trusted provider, and then use the trusted provider as a delegate to establish trustworthiness with other sites with which the user wants to interact. In concept, this type of login through a trusted third party has been in existence for a long time, in many different forms (Microsoft Passport, for example, became one of the more notable central login services on the web for some time). OpenID's distinct advantage is that the OpenID provider needs to implement only the public OpenID protocol to be compatible with any site seeking to integrate login with OpenID. The OpenID specification itself is an open specification, with the result that there is currently a diverse population of public providers up and running the same protocol. This is an excellent recipe for healthy competition, and it is good for consumer choice. The following diagram illustrates the high-level relationship between a site integrating OpenID during the login process and OpenID providers. We can see that the user presents his credentials in the form of a unique named identifier, typically a Uniform Resource Identifier (URI), which is assigned to the user by their OpenID provider and is used to uniquely identify both the user and the OpenID provider. This is commonly done by either prepending a subdomain to the URI of the OpenID provider (for example, https://jamesgosling.myopenid.com/), or appending a unique identifier to the URI of the OpenID provider (for example, https://me.yahoo.com/jamesgosling). We can see visually from the presented URI that both methods clearly identify both the OpenID provider (via the domain name) and the unique user identifier. Don't trust OpenID unequivocally! You can see here a fundamental assumption that can fool users of the system. It is possible for us to sign up for an OpenID which would make it appear as though we were James Gosling, even though we obviously are not. Do not make the false assumption that just because a user has a convincing-sounding OpenID (or OpenID delegate provider) they are the authentic person, without requiring additional forms of identification. Thinking about it another way, if someone came to your door just claiming he was James Gosling, would you let him in without verifying his ID? The OpenID-enabled application then redirects the user to the OpenID provider, to whom the user presents his credentials; the provider is then responsible for making an access decision. Once the access decision has been made by the provider, the provider redirects the user to the originating site, which is now assured of the user's authenticity. OpenID is much easier to understand once you have tried it. Let's add OpenID to the JBCP Pets login screen now! Signing up for an OpenID In order to get the full value of the exercises in this section (and to be able to test login), you'll need your own OpenID from one of the many available providers, of which a partial listing is available at http://openid.net/get-an-openid/. Common OpenID providers with which you probably already have an account are Yahoo!, AOL, Flickr, or MySpace. Google's OpenID support is slightly different, as we'll see later in this article when we add Sign In with Google support to our login page.
To get full value out of the exercises in this article, we recommend you have accounts with at least: myOpenID Google Enabling OpenID authentication with Spring Security Spring Security provides convenient wrappers around provider integrations that are actually developed outside the Spring ecosystem. In this vein, the openid4java project (http://code.google.com/p/openid4java/) provides the underlying OpenID provider discovery and request/response negotiation for the Spring Security OpenID functionality. Writing an OpenID login form It's typically the case that a site will present both standard (username and password) and OpenID login options on a single login page, allowing the user to select one or the other, as we can see in the JBCP Pets target login page. The code for the OpenID-based form is as follows: <h1>Or, Log Into Your Account with OpenID</h1> <p> Please use the form below to log into your account with OpenID. </p> <form action="j_spring_openid_security_check" method="post"> <label for="openid_identifier">Login</label>: <input id="openid_identifier" name="openid_identifier" size="20" maxlength="100" type="text"/> <img src="images/openid.png" alt="OpenID"/> <br /> <input type="submit" value="Login"/> </form> The name of the form field, openid_identifier, is not a coincidence. The OpenID specification recommends that implementing websites use this name for their OpenID login field, so that user agents (browsers) have semantic knowledge of the function of this field. There are even browser plug-ins, such as Verisign's OpenID SeatBelt (https://pip.verisignlabs.com/seatbelt.do), which take advantage of this knowledge to pre-populate your OpenID credentials into any recognizable OpenID field on a page. You'll note that we don't offer the remember me option with OpenID login. This is due to the fact that the redirection to and from the vendor causes the remember me checkbox value to be lost, so that when the user is successfully authenticated, they no longer have the remember me option indicated. This is unfortunate, but it ultimately increases the security of OpenID as a login mechanism for our site, as OpenID forces the user to establish a trust relationship through the provider with each and every login. Configuring OpenID support in Spring Security Turning on basic OpenID support, via the inclusion of a servlet filter and authentication provider, is as simple as adding a directive to our <http> configuration element in dogstore-security.xml as follows: <http auto-config="true" ...> <!-- Omitting content... --> <openid-login/> </http> After adding this configuration element and restarting the application, you will be able to use the OpenID login form to present an OpenID and navigate through the OpenID authentication process. When you are returned to JBCP Pets, however, you will be denied access. This is because your credentials won't have any roles assigned to them. We'll take care of this next. Adding OpenID users As we do not yet have OpenID-enabled new user registration, we'll need to manually insert the user account that we'll be testing into the database, by adding it to test-users-groups-data.sql in our database bootstrap code. We recommend that you use myOpenID for this step (notably, you will have trouble with Yahoo!, for reasons we'll explain in a moment).
If we assume that our OpenID is https://jamesgosling.myopenid.com/, then the SQL that we'd insert in this file is as follows: insert into users(username, password, enabled, salt) values ('https://jamesgosling.myopenid.com/','unused',true,CAST(RAND()*1000000000 AS varchar)); insert into group_members(group_id, username) select id,'https://jamesgosling.myopenid.com/' from groups where group_name='Administrators'; You'll note that this is similar to the other data that we inserted for our traditional username-and-password-based admin account, with the exception that we have the value unused for the password. We do this, of course, because OpenID-based login doesn't require that our site should store a password on behalf of the user! The observant reader will note, however, that this does not allow a user to create an arbitrary username and password and associate them with an OpenID - we describe this process briefly later in this article, and you are welcome to explore how to do this as an advanced application of this technology. At this point, you should be able to complete a full login using OpenID. The sequence of redirects is illustrated with arrows in the following screenshot: We've now OpenID-enabled the JBCP Pets login! Feel free to test using several OpenID providers. You'll notice that, although the overall functionality is the same, the experience that the provider offers when reviewing and accepting the OpenID request differs greatly from provider to provider.


5 barriers to learning and technology training for small software development teams

Richard Gall
30 Aug 2019
9 min read
Managing, supporting, and facilitating training for software developers isn't easy. Such is the pace of change that it can be tough to ensure that not only do your developers have the skills they need, but also that they have the resources at their disposal to explore new technologies, try new approaches and solve complex problems. Okay, sure: there is a wealth of free resources out there. But for many developers, using these is really just learning to walk. It's essential, but it can't be the only thing developers have at their disposal. If it is, their company is doing them a disservice. Even if a company is committed to their team's development, how to go about it still isn't straightforward. There are a number of barriers that need to be overcome when it comes to technology training and learning. Here are 5 of them... Barrier 1: Picking resources on the right topics The days of developers specializing are long gone - particularly in a small tech team where everyone has to get stuck in. Full-stack has ensured that today's software developers need to be confident on everything from databases to UI design, while DevOps means they need to assume responsibility for how their code actually runs in production. The days of "it worked on my machine!" are over. Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin When you factor in cloud native and hybrid cloud, developers today might well not only be working on cloud platforms with a bewildering array of features and opportunities, but also working on a number of different platforms at once. This means that understanding exactly what your developers need to know can be immensely difficult. It also means that you have to set the technology agenda from the off, immediately limiting your developers' ability to solve problems themselves. That's only going to alienate your developers - you're effectively telling them that their curiosity and their desire to explore new topics and alternative solutions are pointless. What can you do? The only way to solve this is to ensure your developers have a rich set of learning resources. Don't limit them to a specific tool or set of technologies. Allow them to explore new topics as they see fit - don't force them down a pre-set path. The other benefit of this is that it means you can be flexible and open in terms of the technology you use. Barrier 2: Legacy software and brownfield projects Sometimes, however, you can't be flexible about the technology you use. Your developers simply need to use existing tools to manage legacy systems and brownfield projects (which build on or demolish existing legacy software). This means that learning needs can become complex. Not only might your development team need a diverse range of skills for one particular project, they might also need to learn very different skill sets for different projects. Maybe you're planning a new project built in React.js - great, get your developers on some cutting-edge content. But then, alongside this, what if they also need to tackle legacy software built using PHP? It's fine if they know PHP, but what if you have a new developer? Or what if they haven't used it for years? What can you do? As above, a rich set of training and learning resources is vital. But in this instance it's particularly important to ensure developers have access to learning materials on a wide range of technologies. So yes, cutting-edge topic coverage is essential, but so is content on established and even residual technologies.
Barrier 3: Lack of time for learning new software skills and concepts Lack of time is one of the biggest barriers to learning and training in the tech industry. At least that’s the perception - in reality, a lack of time to learn is caused by resource challenges, and the cultural status of learning inside an organization. If it isn’t a priority, then it’s often the thing that gets pushed back as new day to day activities manage to insert themselves in your team’s day or existing ones stretch out to fill the hours in a way that no one expected. This is risky. Although there are times when things simply need to get done quickly, overlooking learning will not only lead to disengagement in your team, it will also leave them unprepared for challenges that may emerge throughout the course of a project. It’s often said that software development is all about solving problems - but how confident can we be that we’re equipped to solve them if we’re not committed to making time for learning? What can you do? The first step is obvious: make time for learning. One method is to set aside a specific period in the week which your team is encouraged to use for personal development. While that can work well, sometimes trying to structure learning in this way can make it feel like a chore for employees, as if you’re trying to fit them into a predetermined routine. Another option is to simply encourage continuous learning. That might mean a short period every day just to learn a new concept or approach. Or it could be something like a learning diary that allows developers to record things they learn and plan what they want to learn next. Essentially, what’s important is putting your developers in control. Give them the tools to make time - as well as the resources that they can use quickly and easily, and they’ll not only learn new skills much more quickly, they’ll also be more engaged and more curious engineers. Read next: “All of my engineering teams have a machine learning feature on their roadmap” – Will Ballard talks artificial intelligence in 2019 [Interview] Barrier 4: Different learning preferences within the team In even a small engineering team, every person will have unique needs and preferences when it comes to learning. So, even if all your team wants to learn the same topics, they still might disagree about how they want to do it. Some developers, for example, can’t stand learning with video. It forces you to learn at its pace, and - shock horror - you have to listen to someone. Others, however, love it - it’s visual, immediate and, you know, maybe having someone with a voice explaining how things work isn’t actually that bad? Similarly, some people love training courses - the idea of sitting with someone, rather than going it alone - while others value their independence and agency. This means that keeping everyone happy can be tough. It might be tempting to go one of two ways - either decide how you want people to learn and let them get on with it, or let everyone go it alone, but neither approach is ideal. Forcing people to learn a certain way will penalise those who don’t like your preferred learning method, hurting not only their ability to learn new skills but also their trust with you. Letting everyone be independent, meanwhile, means you never have any oversight on what people are doing or using - there’s no level playing field. What can you do? The key to getting over this barrier is balance. 
You want to preserve your team's independence and sense of agency while also having some transparency over what people are using. Resources that offer a mix of formats are great for this. That way, no one is forced to watch hours and hours of video courses or to trawl through text that will just send them to sleep. Equally, another important thing to think about is how the learning resources you provide your team complement other learning activities that they may be doing independently (things like looking for answers to questions in blog posts or on YouTube). By doing that you can allow your developers some level of independence while also ensuring that you have a set of organizational resources and tools that are available and accessible to everyone. Read next: Why do IT teams need to transition from DevOps to DevSecOps? Barrier 5: The cost of technology training and learning resources Cost is perhaps the biggest barrier to training for development teams. Specific courses and events can be astronomically expensive - especially if more than one person needs to attend - and even some learning platforms can cost significant amounts of money. Now, that's not always a problem for engineering teams operating in a more corporate or enterprise environment. But for smaller teams in small and medium-sized businesses, training can become quite an overhead. In turn, this can have other consequences - from a complete disregard and deprioritization of training and learning, to internal frustration at who is getting support and investment and who isn't - it's not uncommon to see training become a catalyst for many other organizational and cultural problems. What can you do? The trick here is to not invest heavily in one single thing. Don't blow your budget on a single course - what if it's not up to scratch? Don't commit to an expensive learning platform. However good the marketing looks, if your team doesn't like it, they're certainly not going to use it. Fortunately, there are affordable solutions out there that can ensure you're not breaking the bank when it comes to training. It might even leave you with some cash left over that you can invest in other resources and materials. Learning new skills isn't easy. It requires patience and commitment. But developers need support and resources to take some of the strain out of their learning challenges. There's no one way to meet the learning needs of developers. But Packt for Teams can help you overcome many of the barriers of developer training. Learn more here.


There's more to learning programming than just writing code

Richard Gall
15 Nov 2019
8 min read
Everyone should learn to code, right? If everyone learned programming, not only would people have better jobs, the economy would be growing, and ultimately we'd all have far superior lives to the ones we lead now. Except - clearly - that's just not true. Yes, perhaps that position is a bit of a caricature, but it's one that isn't that uncommon. Lawmakers talk about the importance of making programming and coding part of the curriculum, and are keen to make loud and enthusiastic noises about investing in STEM subjects. We need more engineers to power the digital economy, the thinking goes. While introducing children to code certainly isn't a bad thing, this way of viewing the world is pretty damaging - not least to those already in engineering roles and those organizations that depend on them. This is because it reduces the activity of writing code to something simple. It turns programming, a complex and ultimately deeply human activity, into something more machine-like. It almost suggests it's just a question of writing letters and numbers into a code editor and then just watching the whole thing run. Programming might involve working with machines, but in truth it's anything but machine-like. For business leaders, failing to understand what programming actually involves can lead to a really poor engineering culture. A reductive view of the work that software engineers do means increased pressure, more burnout and lower quality software being delivered. In turn, that has a negative impact on the bottom line. It might not be immediately apparent, but poor software means code rewrites, poor user experiences, and high turnover of personnel. That costs money, because organizations will be spending valuable time and energy trying to fix the mistakes of the past. We need to keep an open mind about what it means to "learn programming". However, with a more open-minded perspective on what it actually means to be a programmer, and what 'learning programming' actually means, you can build a much more productive engineering culture. This involves not only respecting the learning process, but also recognising that learning isn't about just taking a course or doing a live coding exercise. It does, in fact, involve a much more diverse range of activities. Let's look at what some of them are. Evaluating software One of the most important parts of a software developer's work is evaluating software. This can happen in various ways. Most obviously, technology leaders (CTOs, Principal Architects, Development Leads) have to evaluate different tools and platforms before they implement a project. Questions here will revolve primarily around cost, but it certainly won't be the leadership team's sole concern. Other issues like integration, product capabilities, even the learning curve and level of complexity will need to be considered (will we need to hire specialist engineers or can our existing team pick it up quickly?). Perhaps that all sounds obvious, but too often we forget that this is work that needs to be done. To make these sorts of assessments - which are often business critical - individuals will need a high degree of knowledge. Without it they can't be confident that they're making the right decision for the business. In this sense then, learning about technologies is just as important as the process of learning how to use technologies. Some might say it's even more important.
It's not only senior developers and tech leaders that evaluate software Evaluating software is by no means a task limited to those in senior positions. Developers and engineers who spend the majority of their time shipping code will still need to learn about technologies too. They might not be responsible for architecting a new software system or purchasing PaaS products, but they will have to make personal decisions about what tools they use to solve specific problems. This might sometimes be about the tools they use to boost their productivity and better manage their development workflow, but it isn't limited to that. In broad terms it's about having an open mind about the range of approaches that can be taken to new challenges. This means that all technology professionals need to learn about technologies - how they work, how they compare to one another, and even what the trade-offs between them are. This shouldn't be treated as an optional extra, but instead as a fundamental part of the learning process. Read next: Developers are today's technology decision makers Programming techniques and design principles When talking about learning, it's easy to fall into a trap where we privilege practice over theory. Theory, certain lines of thinking go, is self-indulgent, unnecessary, and time-consuming. What's really important is that people can simply start getting their hands dirty and learn by doing. While it's true that the practical dimension of learning is vital - in technology or any other field - we overlook theory at our peril. In reality, theory and practice should go together. Practice should be a way of illuminating the theory, and theory should be a way of explaining why something works the way it does, or why you should do something in a certain way. Think of it this way: if everyone only learned through practice, we'd all be incapable of applying our skills and knowledge to new problems and challenges. We'd be fixed in our mindset, more like machines than creative human beings. For developers and software engineers this is particularly true. By understanding the principles behind how something works, it becomes much easier to apply solutions to new contexts or even reconfigure them in ways that are appropriate and effective. Improving software with design-led principles Programming techniques and philosophies, like functional or object-oriented programming, for example, can help developers and engineers to write code in a specific way, helping them to unlock greater performance and efficiency (both personally and from a technical perspective). Similarly, design patterns provide a way of thinking about your code in relation to various commonly occurring problems. It's true that this still requires developers to get close to code. But this is actually a level of abstraction above the practice of writing code that allows developers to think critically about what they do. So, while a good way to learn these sorts of principles is to see what they look like in practice, it's still essential for developers to have a robust conceptual understanding of them. Understanding users and business needs Software doesn't exist in a vacuum. On one side there's the business, on the other there's a user. It sounds obvious, but it's essential that technology professionals are sensitive to these two contextual elements. Business needs and user needs are what ultimately make their work meaningful.
In practice, this doesn’t mean people working in technology all need to go and take an MBA. But they do need to have a clear conceptual understanding of how software development and software systems should align with the needs of both internal stakeholders (ie. the business), and users. This isn’t always easy to learn, and there’s no manual for how it should be done. However, it fits across the two points we mentioned above. The software we decide to use, and the way we decide to use it will always be informed by the needs of both the business and users. What this means in practice, then, is that learning about software needs to be informed by the wider context of what that software is for, and what a business is trying to achieve. Some technology professionals enter the industry possessing this kind of awareness and sensitivity. Many others, however, do not, and for these people it’s essential that they have the space to understand how the various facets of the work they do are connected to real-life consequences. Writing code doesn’t help you to do that. Taking a step back and understanding the context in which that code is being written can and will. Read next: 6 reasons why employers should pay for their developers’ training and learning resources Conclusion: Great programming requires a combination of theoretical knowledge and practical talent The opposition between theory and practice is false. It doesn’t help anyone. A culture of ‘getting stuff done’ and shipping code regardless is not only bad for individual developers, it can also be damaging at an organizational level. Without a careful consideration of what you’re trying to achieve, how software can help you to do it, and what it requires to execute it effectively, organizations can become prone to error and mistakes. This leads to wasted time and, more importantly, wasted money. While Facebook’s mantra of ‘move fast and break things’ might sound like the defining phrase of the modern tech industry, good developers need both space and resources to think, plan, and conceptualize. This doesn’t mean we all need to go slow. Instead, it means we need to try to empower engineers to do the right thing, not the quick thing. Give your team access to a diverse range of resources to learn everything they need to build better software. Start a Packt for Teams subscription today.


Setting Up Polars for Data Analysis

Luca Zanna
23 Feb 2024
7 min read
This article is an excerpt from the book Data Analysis with Polars, by Luca Zanna. Leverage Polars, the lightning-fast dataframe library, to take your Python data analysis skills to the next level. Introduction In the ever-evolving landscape of data analysis, harnessing the right tools and methodologies can make all the difference. Welcome to a world where Polars, a powerful data manipulation library, takes center stage. This article is your gateway to unlocking the potential of Polars, and it begins by unraveling the essential components of the data analysis journey. From setting up virtual environments to simplifying data analysis in the cloud with Google Colab, we explore how Polars streamlines your path to insights. Whether you're a seasoned data analyst or just starting your journey, this guide will equip you with the knowledge and tools needed to make your data analysis endeavors efficient and rewarding. Join us as we delve into the fascinating realm of Polars and embrace a new era of data exploration. Installation and virtual environments We will not go through the installation of Python, as that is outside the scope of the book. A visit to python.org will give you all the information necessary to install Python. Now on to virtual environments. Understanding virtual environments and their benefits Imagine you have built a fantastic data analysis project using Polars. Your project uses: Python 3.8, Polars 0.15.1, and Numpy 1.23.0. Now, you start a new project, and you want to use a newer Polars (0.16.14), along with Numpy and Arrow. So, the new project requires: Python 3.10, Polars 0.16.14, Numpy 1.24.0, and Pyarrow 11.0.0. Upgrading the Polars and Numpy libraries globally isn't a good idea. If Polars functions have changed between versions, your first project might stop working or give incorrect results with the new version. This is where virtual environments come in. Virtual environments create separate 'spaces' for each project: one for your first data analysis project and another for your new data pipeline project. You can set up a virtual environment manually or have your IDE set up a virtual environment for you. If you decide to set it up manually, you can check out the guide at https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment. Installing and using Polars on a machine To install Polars, first make sure you are in a virtual environment. Then, type: pip install polars If you already have Polars installed and want to upgrade it, type: pip install polars --upgrade In the book we will use other libraries, including numpy, pandas, and matplotlib. You can install them with the syntax above, and you can also install multiple libraries at the same time: pip install numpy pandas matplotlib Let's now get our development environment set up. We will use Visual Studio Code, but you are free to use any other IDE that you like. 1. Type code . in the command line to open Visual Studio Code. 2. Right-click on the left, choose New File, and create first_dataframe.ipynb. Figure – Creating a new file in Visual Studio Code Files with the extension .ipynb are Jupyter Notebook files, which are great for data analysis. To work with these files you need to install the Jupyter extension in VS Code.
You can do that by clicking on 'Extensions' on the left bar, searching for Jupyter, installing it, and activating it. Figure – Installing the Jupyter extension in Visual Studio Code 3. Now back to our file. The first thing to ensure is that we are using the Python from our virtual environment. Click on Select Kernel at the top right, then click on the Python that starts with env/: that will be the Python for our virtual environment. Avoid the paths starting with /usr and /bin, as those are the system Python instead of our virtual environment. Figure – Selecting the Python interpreter in Visual Studio Code Now, we're ready for Polars. 4. Type import polars as pl in the first cell and press Shift + Enter to run it. 5. Create a dataframe in the next cell by typing: df = pl.DataFrame({    'a': ['Hello', 'World!'] }) 6. Press Shift + Enter to run the cell. This creates a dataframe called df with one column named 'a' and two rows: 'Hello' and 'World!' To see the dataframe, type df in the next cell and run it. Figure – Visual Studio Code with our first Polars dataframe We created our first Polars dataframe. Using Polars on the cloud with Google Colab Instead of installing Polars on your computer, you can also use it in the cloud. One popular cloud service for running code is Google Colab. This way, you don't need to install anything on your machine. To access Google Colab, visit https://colab.research.google.com/ in your web browser. Click on "New Notebook," and you'll see a page that looks similar to VS Code. Now, let's create the same Polars dataframe example in Google Colab: 1. In the first cell, type the following command to ensure we have the latest version of Polars: %pip install polars --upgrade 2. Next, enter this code to import Polars and create a dataframe: import polars as pl df = pl.DataFrame({    'a': ['Hello', 'World!'] }) 3. Finally, display the dataframe by typing: df And that's it! You now have your first Polars dataframe in Google Colab. Figure – Google Colab with our first Polars dataframe Conclusion In closing, Polars offers a bridge to the future of data analysis. With the knowledge and hands-on experience gained from this article, you're well-prepared to conquer the intricacies of data manipulation and visualization. The ability to effortlessly create, manipulate, and analyze data using Polars is a powerful tool in your arsenal. Whether you're a data enthusiast or a seasoned analyst, embracing Polars sets you on a path toward efficiency, precision, and data-driven success. As the data landscape continues to evolve, you're now equipped to stay ahead, make informed decisions, and revolutionize your approach to data exploration. Author Bio Luca Zanna is a Data Engineer and Data Analyst with over 15 years of experience. He started his career as a financial data analyst after a Master's in Management and passing the Certified Public Accountant (CPA) exam. Luca spent a decade working on financial analysis systems at L'Oréal: developing the systems and training financial analysts across Europe and Asia. Currently, Luca helps companies with building data infrastructure to better leverage their data. Luca is also a corporate teacher for topics such as data analysis, SQL, Python, and cloud data engineering.

What is Responsive Web Design

Packt
03 Aug 2016
32 min read
In this article by Alex Libby, Gaurav Gupta, and Asoj Talesra, the authors of the book Responsive Web Design with HTML5 and CSS3 Essentials, we will cover the basic elements of responsive web design (RWD). Getting started with Responsive Web Design If one had to describe responsive web design in a sentence, then responsive design describes how the content is displayed across various screens and devices, such as mobiles, tablets, phablets, or desktops. To understand what this means, let's use water as an example. The property of water is that it takes the shape of the container in which it is poured. Responsive design is an approach in which a website or a webpage adjusts its layout dynamically according to the size or resolution of the screen. This ensures that users get the best experience while using the website. We develop a single website that uses a single code base. This will contain fluid, flexible images, proportion-based grids, fluid video, and CSS3 media queries to work across multiple devices and device resolutions—the key to making them work is the use of percentage values in place of fixed units, such as pixels or em-based sizes. The best part of this is that we can use this technique without the knowledge or need of server-based/backend solutions—to see it in action, we can use Packt's website as an example. Go ahead and browse to https://www.packtpub.com/web-development/mastering-html5-forms; this is what we will see as a desktop view: The mobile view for the same website shows this if viewed on a smaller device: We can clearly see the same core content is being displayed (that is, an image of the book, the buy button, pricing details, and information about the book), but elements such as the menu have been transformed into a single drop-down located in the top left corner. This is what responsive web design is all about—producing a flexible design that adapts according to which device we choose to use, in a format that suits the device being used. Understanding the elements of RWD Now that we've been introduced to RWD, it's important to understand some of the elements that make up the philosophy of what we know as flexible design. A key part of this is understanding the viewport, or visible screen real estate, available to us. In addition, there are several key elements that make up RWD—these center around viewports, flexible media, responsive text and grids, and media queries. We will cover each in more detail later in the book, but for now, let's have a quick overview of the elements that make up RWD. Controlling the viewport A key part of RWD is working with the viewport, or visible content area, on a device. If we're working with desktops, then it is usually the resolution; this is not the case for mobile devices. There is a temptation to reach for JavaScript (or a library, such as jQuery) to set values, such as viewport width or height: there is no need, as we can do this using a simple HTML meta tag: <meta name="viewport" content="width=device-width"> Or by using this directive: <meta name="viewport" content="width=device-width, initial-scale=1"> This means that the browser should render the width of the page at the same width as the browser window—if, for example, the latter is 480px, then the width of the page will be 480px.
To see what a difference not setting a viewport can have, take a look at this example screenshot: This example was created by displaying some text in Chrome, in iPhone 6 Plus emulation mode, but without a viewport. Now, let's take a look at the same text, but this time with a viewport directive set: Even though this is a simple example, do you notice any difference? Yes, the title color has changed, but more importantly the width of our display has increased. This is all part of setting a viewport—browsers frequently assume we want to view content as if we're on a desktop PC. If we don't tell them that the viewport area has been shrunk in size, they will try to shoehorn all of the content into a smaller space, which doesn't work very well! It's critical therefore that we set the right viewport for our design, and that we allow it to scale up or down in size, irrespective of the device—we will explore this in more detail. Creating flexible grids When designing responsive websites, we can either create our own layout or use a grid system already created for use, such as Bootstrap. The key here though is ensuring that the mechanics of our layout sizes and spacing are set according to the content we want to display for our users, and that when the browser is resized in width, the layout realigns itself correctly. For many developers, the standard unit of measure has been pixel values; a key part of responsive design is to make the switch to using percentage and em (or preferably rem) units. The latter scale better than standard pixels, although there is a certain leap of faith needed to get accustomed to working with the replacements! Making media responsive A key part of our layout is, of course, images and text—the former though can give designers a bit of a headache, as it is not enough to simply use large images and set overflow: hidden to hide the parts that are not visible! Images in a responsive website must be as flexible as the grid used to host them—for some, this may be a big issue if the website is very content-heavy; now is a good time to consider if some of that content is no longer needed, and can be removed from the website. We can, of course, simply apply display: none to any image which shouldn't be displayed, according to the viewport set. This isn't a good idea though, as the content still has to be downloaded before styles can be applied; it means we're downloading more than is necessary! Instead, we should assess the level of content, make sure it is fully optimized, and apply percentage values so it can be resized automatically to a suitable size when the browser viewport changes. Constructing suitable breakpoints With content and media in place, we must turn our attention to media queries—there is a temptation to create queries that suit specific devices, but this can become a maintenance headache. We can avoid the headache by designing queries based on where the content breaks, rather than for specific devices—the trick to this is to start small and gradually enhance the experience with the use of media queries: <link rel="stylesheet" media="(max-device-width: 320px)" href="mobile.css" /> <link rel="stylesheet" media="(min-width: 1600px)" href="widescreen.css" /> We should aim for around 75 characters per line, to maintain an optimal length for our content.
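To make the content-first approach concrete, here is a minimal sketch of content-driven breakpoints in CSS. The selector names and em values are illustrative assumptions, not taken from the book; the point is that each query kicks in where the design starts to break, not at a particular device's width:

/* Base: a single fluid column for narrow viewports */
.container { width: 90%; margin: 0 auto; }

/* Where lines grow long enough to support two columns */
@media (min-width: 40em) {
  .sidebar { float: left; width: 30%; }
  .content { float: left; width: 70%; }
}

/* Where unlimited width would hurt readability, cap the line length */
@media (min-width: 70em) {
  .container { max-width: 70em; }
}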
Introducing flexible grid layouts For many years, designers have built layouts of different types—they may be as simple as a calling-card website, right through to a theme for a content management system, such as WordPress or Joomla. The meteoric rise of accessing the Internet through different devices means that we can no longer create layouts that are tied to specific devices or sizes—we must be flexible! Achieving this flexibility requires us to embrace a number of changes in our design process—the first being the type of layout we should create. A key part of this is the use of percentage values to define our layouts; rather than create something from the ground up, we can make use of a predefined grid system that has been tried and tested, as a basis for future designs. The irony is that there are lots of grid systems vying for our attention, so without further ado, let's make a start by exploring the different types of layouts, and how they compare to responsive designs. Understanding the different layout types A problem that web designers have faced for some years is the type of layout their website should use—should it be fluid, fixed-width, have the benefits of being elastic, or be a hybrid version that draws on the benefits of a mix of these layouts? The type of layout we choose to use will of course depend on client requirements—making it a fluid layout means we are effectively one step closer to making it responsive: the difference being that the latter uses media queries to allow resizing of content for different devices, not just normal desktops! To understand the differences, and how responsive layouts compare, let's take a quick look at each in turn: Fixed-width layouts: These are constrained to a fixed width; a good size is around 960px, as this can be split equally into columns with no remainder. The downside is that the fixed width makes assumptions about the available viewport area, and if the screen is too small or too large, it results in scrolling or lots of empty space, both of which affect the user experience. Fluid layouts: Instead of using static values, we use percentage-based units; it means that no matter what the size of the browser window, our website will adjust accordingly. This removes the problems that surround fixed layouts at a stroke. Elastic layouts: These are similar to fluid layouts, but the constraints are measured by type or font size, using em or rem units; these are based on the defined font size, so 16px is 1 rem, 32px is 2 rem, and so on. These layouts allow for decent readability, with lines of 45-70 characters; font sizes are resized automatically. We may still see scrollbars appear in some instances, or experience some odd effects, if we zoom our page content. Hybrid layouts: These combine a mix of two or more of these different layout types; this allows us to choose static widths for some elements whilst others remain elastic or fluid. In comparison, responsive layouts take fluid layouts a step further, by using media queries to not only make our designs resize automatically, but present different views of our content on multiple devices. Exploring the benefits of flexible grid layouts Now that we've been introduced to grid layouts as a tenet of responsive design, it's a good opportunity to explore why we should use them.
Creating a layout from scratch can be time-consuming and needs lots of testing—there are some real benefits to using a grid layout: Grids make for a simpler design: Instead of trying to reinvent the proverbial wheel, we can focus on providing the content instead; the infrastructure will have already been tested by the developer and other users. They provide for a visually appealing design: Many people prefer content to be displayed in columns, so grid layouts make good use of this concept to help organize content on the page. Grids can of course adapt to different size viewports: The system they use makes it easier to display a single codebase on multiple devices, which reduces the effort required for developers to maintain and webmasters to manage. Grids help with the display of adverts: Google has been known to favor websites which display genuine content and not those where it believes the sole purpose of the website is ad generation; we can use the grid to define specific areas for adverts, without getting in the way of natural content. All in all, it makes sense to familiarize ourselves with grid layouts—the temptation is of course to use an existing library. There is nothing wrong with this, but to really get the benefit out of using them, it's good to understand some of the basics around the mechanics of grid layouts, and how this can help with the construction of our website. Making media responsive Our journey through the basics of adding responsive capabilities to a website has so far touched on how we make our layouts respond automatically to changes—it's time for us to do the same to media! If your first thought is that we need lots of additional functionality to make media responsive, then I am sorry to disappoint—it's much easier, and requires zero additional software to do it! Yes, all we need is just a text editor and a browser. I'll use my favorite editor, Sublime Text, but you can use whatever works for you. Over the course of this chapter, we will take a look in turn at images, video, audio, and text, and we'll see how, with some simple changes, we can make each of them responsive. Let's kick off our journey first with a look at making image content responsive. Creating fluid images It is often said that a picture speaks a thousand words. We can express a lot more with media than we can using words. This is particularly true for websites selling products—a clear, crisp image clearly paints a better picture than a poor quality one! When constructing responsive websites, we need our images to adjust in size automatically—to see why this is important, go ahead and extract coffee.html from a copy of the code download that accompanies this book, and run it in a browser. Try resizing the window—we should see something akin to this: It doesn't look great, does it? Leaving aside my predilection for nature's finest bean drink (that is, coffee!), we can't have images that don't resize properly, so let's take a look at what is involved to make this happen: Go ahead and extract a copy of coffee.html and save it to our project area. We also need our image. This is in the img folder; save a copy to the img folder in our project area. In a new text file, add the following code, saving it as coffee.css: img { max-width: 100%; height: auto; } Revert back to coffee.html. You will see line 6 is currently commented out; remove the comment tags. Save the file, then preview it in a browser. If all is well, we will still see the same image as before, but this time try resizing it.
This time around, our image grows or shrinks automatically, depending on the size of our browser window: Although our image does indeed fit better, there are a couple of points we should be aware of when using this method: Sometimes you might see !important set as a property against the height attribute when working with responsive images; this isn't necessary, unless you're setting sizes in a website where image sizes may be overridden at a later date. We've set max-width to 100% as a minimum. You may also need to set a width value too, to be sure that your images do not become too big and break your layout. This is an easy technique to use, although there is a downside that can trip us up—spot what it is? If we use a high quality image, its file size will be hefty. We can't expect users of mobile devices to download it, can we? Don't worry though—there is a great alternative that has quickly gained popularity amongst browsers; we can use the <picture> element to control what is displayed, depending on the size of the available window. Implementing the <picture> element In a nutshell, responsive images are images that are displayed in their optimal form on a page, depending on the device your website is being viewed from. This can mean several things: You want to show a separate image asset based on the user's physical screen size—this might be a 13.5 inch laptop or a 5 inch mobile phone screen. You want to show a separate image based on the resolution of the device, or using the device-pixel ratio (which is the ratio of device pixels to CSS pixels). You want to show an image in a specified image format (WebP, for example) if the browser supports it. Traditionally, we might have used simple scripting to achieve this, but it is at the risk of potentially downloading multiple images or none at all, if the script loads after images have loaded, or if we don't specify any image in our HTML and want the script to take care of loading images. We clearly need a better way to manage responsive images! A relatively new tag for HTML5 is perfect for this job: <picture>. We can use this in one of three different ways, depending on whether we want to resize an existing image, display a larger one, or show a high-resolution version of the image; a short sketch follows below.
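As a quick illustration of the element in action, here is a hedged sketch of serving different assets by viewport width with <picture>. The file names and breakpoint widths are illustrative assumptions, not assets from the book's code download:

<!-- Sketch: the browser picks the first matching source, top to bottom -->
<picture>
  <source media="(min-width: 1024px)" srcset="img/coffee-large.jpg">
  <source media="(min-width: 768px)" srcset="img/coffee-medium.jpg">
  <!-- Fallback for small screens and for browsers without <picture> support -->
  <img src="img/coffee-small.jpg" alt="A cup of coffee">
</picture>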
Making video responsive Flexible videos are somewhat more complex than images. The HTML5 <video> element maintains its aspect ratio just like images, and therefore we can apply the same CSS principle to make it responsive: video { max-width: 100%; height: auto !important; } Until relatively recently, there have been issues with HTML5 video—this is due in the main to split support for the codecs required to run HTML5 video. The CSS required to make an HTML5 video responsive is very straightforward, but using it directly presents a few challenges: Hosting video is bandwidth intensive and expensive. Streaming requires complex hardware support in addition to the video itself. It is not easy to maintain a consistent look and feel across different formats and platforms. For many, a better alternative is to host the video through a third-party service such as YouTube—we can let them worry about bandwidth issues and providing a consistent look and feel; we just have to make it fit on the page! This requires a little more CSS styling to make it work, typically by wrapping the embedded player in a container that preserves its aspect ratio. Making text fit on screen When building websites, it goes without saying that our designs must start somewhere—this is usually with adding text. It's therefore essential that we allow for this in our responsive designs at the same time. Now is a perfect opportunity to explore how to do this—although text is not media in the same way as images or video, it is still content that has to be added at some point to our pages! With this in mind, let's dive in and explore how we can make our text responsive. Sizing with em units When working on non-responsive websites, it's likely that sizes will be quoted in pixel values—it's a perfectly acceptable way of working. However, if we begin to make our websites responsive, then content won't resize well using pixel values—we have to use something else. There are two alternatives: em or rem units. The former is based on setting a base font size that in most browsers defaults to 16px; in this example, the equivalent pixel sizes are given in the comments that follow each rule: h1 { font-size: 2.4em; } /* 38px */ p { line-height: 1.4em; } /* 22px */ Unfortunately there is an inherent problem with using em units—if we nest elements, then font sizes will be compounded, as em units are calculated relative to the parent. For example, if the font size of a list element is set at 1.4em (22px), then the font size of a list within a list becomes 30.8px (1.4 x 22px). To work around these issues, we can use rem values as a replacement—these are calculated from the root element, in place of the parent element. If you look carefully throughout many of the demos created for this book, you will see rem units being used to define the sizes of elements in that demo. Using rem units as a replacement The rem (or root em) unit is set to be relative to the root, instead of the parent—it means that we eliminate any issue with compounding at a stroke, as our reference point remains constant, and is not affected by other elements on the page. The downside of this is support—rem units are not supported in IE7 or 8, so if we still have to support these browsers, then we must fall back to using pixel or em values instead. This of course raises the question—should we still support these browsers, or is their usage of our website so small as to not be worth the effort required to update our code?
Making text fit on screen

When building websites, it goes without saying that our designs must start somewhere—this is usually with adding text. It's therefore essential that we allow for this in our responsive designs at the same time. Now is a perfect opportunity to explore how to do this—although text is not media in the same way as images or video, it is still content that has to be added at some point to our pages! With this in mind, let's dive in and explore how we can make our text responsive.

Sizing with em units

When working on non-responsive websites, it's likely that sizes will be quoted in pixel values – it's a perfectly acceptable way of working. However, if we begin to make our websites responsive, then content won't resize well using pixel values—we have to use something else. There are two alternatives: em or rem units. The former is based on setting a base font size that in most browsers defaults to 16px; in this example, the equivalent pixel sizes are given in the comments that follow each rule:

h1 { font-size: 2.4em; } /* 38.4px */
p { line-height: 1.4em; } /* 22.4px */

Unfortunately, there is an inherent problem with using em units—if we nest elements, then font sizes will be compounded, as em units are calculated relative to their parent. For example, if the font size of a list element is set at 1.4em (roughly 22px), then the computed font size of a list within that list becomes roughly 30.8px (1.4 x 22px). To work around these issues, we can use rem values as a replacement—these are calculated from the root element, in place of the parent element. If you look carefully throughout many of the demos created for this book, you will see rem units being used to define the sizes of elements in those demos.

Using rem units as a replacement

The rem (or root em) unit is set to be relative to the root, instead of the parent – it means that we eliminate any issue with compounding at a stroke, as our reference point remains constant and is not affected by other elements on the page. The downside of this is support—rem units are not supported in IE7 or 8, so if we still have to support these browsers, then we must fall back to using pixel or em values instead. This of course raises the question—should we still support these browsers, or is their usage of our website so small as to not be worth the effort required to update our code?

If the answer is that we must support IE8 or below, then we can take a hybrid approach—we can set both pixel/em and rem values at the same time in our code, thus:

.article-body { font-size: 18px; font-size: 1.125rem; /* 18 / 16 */ }
.caps, figure, footer { font-size: 14px; font-size: 0.875rem; /* 14 / 16 */ }

Notice how we set the pixel values first? Browsers that support rem units will override the pixel value with the rem value that follows it; any that don't will simply ignore the rem declaration and fall back to using the pixel value instead.

Exploring some examples

Open a browser—let's go and visit some websites. Now, you may think I've lost my marbles, but stay with me: I want to show you a few examples. Let's take a look at a couple of example websites at different screen widths—how about this example, from my favorite coffee company, Starbucks:

Try resizing the browser window—if you get small enough, you will see something akin to this:

Now, what was the point of all that, I hear you ask? Well, it's simple—all of them use media queries in some form or other; CSS Tricks uses the queries built into WordPress, Packt's website is hosted using Drupal, and Starbucks' website is based around the Handlebars template system. The key here is that all use media queries to determine what should be displayed—throughout the course of this chapter, we'll explore using them in more detail, and see how we can use them to better manage content in responsive websites. Let's make a start with exploring their makeup in more detail.

Understanding media queries

The martial artist Bruce Lee summed it up perfectly, when likening the effects of media queries to how water acts in different containers:

"Empty your mind, be formless, shapeless - like water. Now you put water in a cup, it becomes the cup; you put water into a bottle it becomes the bottle; you put it in a teapot it becomes the teapot. Now water can flow or it can crash. Be water, my friend."

We can use media queries to apply different CSS styles, based on the available screen real estate or specific device characteristics. These might include, but are not limited to, the type of display, screen resolution, or display density. Media queries work on the basis of testing to see if certain conditions are true, using this format:

@media [not|only] [mediatype] and ([media feature]) { /* CSS rules */ }

We can use a similar principle to determine if entire style sheets should be loaded, instead of individual queries:

<link rel="stylesheet" media="mediatype and|only|not (media feature)" href="myStyle.css">

Seems pretty simple, right? The great thing about media queries is that we don't need to download or install any additional software to use or create them – we can build most of them in the browser directly.
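To make that syntax concrete, here is a small illustrative example (the breakpoint value and class name are our own, purely for demonstration):

/* Base styles, applied everywhere */
.sidebar {
  float: right;
  width: 30%;
}

/* Narrow viewports: stack the sidebar under the main content */
@media only screen and (max-width: 480px) {
  .sidebar {
    float: none;
    width: 100%;
  }
}

The rules inside the @media block only take effect while the condition holds (here, a viewport no wider than 480px); at any other size, the base styles stand.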
Removing the need for breakpoints

Up until now, we've covered how we can use breakpoints to control what is displayed, and when, according to which device is being used. Let's assume you're working on a project for a client, and have created a series of queries that use values such as 320px, 480px, 768px, and 1024px to cover support for a good range of devices. No matter what our design looks like, we will always be faced with two issues if we focus on using specific screen viewports as the basis for controlling our designs: keeping up with the sheer number of devices that are available, and the inflexibility of limiting our screen width.

So hold on: we're creating breakpoints, yet this can end up causing us more problems? If we're finding ourselves creating lots of media queries that address specific problems (in addition to standard ones), then we will start to lose the benefits of a responsive website—instead, we should re-examine our website to understand why the design isn't working, and see if we can't tweak it so as to remove the need for the custom query. Ultimately, our website and target devices will dictate what is required—a good rule of thumb is that if we are creating more custom queries than a standard set of four to six breakpoints, then perhaps it is time to recheck our design!

As an alternative to working with specific screen sizes, there is a different approach we can take, which is to follow the principle of adaptive design rather than responsive design. Instead of simply specifying a number of fixed screen sizes (such as for the iPhone 6 Plus or a Samsung Galaxy unit), we build our designs around the points at which the design begins to fail. Why? The answer is simple—the idea here is to come up with different bands, where designs will work between a lower and an upper value, instead of simply specifying a query that checks for fixed screen sizes that are lower or above certain values.

Understanding the importance of speed

The advent of using different devices to access the internet means speed is critical – the time it takes to download content from hosting servers, and how quickly the user can interact with the website, are key to the success of any website. Why is it important to focus on the performance of our website on mobile devices, or on devices with lower screen resolutions? There are several reasons, including the following:

80 percent of internet users own a smartphone
Around 90 percent of users go online through a mobile device, with 48% of users using search engines to research new products
Approximately 72 percent of users abandon a website if the loading time is more than 5-6 seconds
Mobile digital media time is now significantly higher than desktop use

If we do not consider statistics such as these, then we may go ahead and construct our website, but end up losing both income and market share because we have not fully considered the extent of where our website should work. Coupled with this is the question of performance – if our website is slow, then this will put customers off and contribute to lost sales. A study performed by San Francisco-based Kissmetrics shows that mobile users wait between 6 and 10 seconds before they close a website and lose faith in it. At the same time, tests performed by Guy Podjarny for the Mediaqueri.es website (http://mediaqueri.es) indicate that we're frequently downloading the same content for both large and small screens—this is entirely unnecessary when, with some simple changes, we can vary content to better suit desktop PCs or mobile devices! So what can we do? Well, before we start exploring where to make changes, let's take a look at some of the reasons why websites run slowly.

Understanding why pages load slowly

Although we may build a great website that works well across multiple devices, it's still no good if it is slow!
Every website will of course operate differently, but there are a number of factors to allow for, which can affect page (and website) speed:

Downloading data unnecessarily: On a responsive website, we may hide elements that are not displayed on smaller devices; the use of display: none in code means that we still download content, even though we're not showing it on screen, resulting in slower websites and higher bandwidth usage.

Downloading images before shrinking them: If we have not optimized our website with properly sized images, then we may end up downloading images that are larger than necessary on a mobile device. We can of course make them fluid by using percentage-based size values, but this places extra demand on the server and browser to resize them.

A complicated DOM in use on the website: When creating a responsive website, we have to add in a layer of extra code to manage different devices; this makes the DOM more complicated and slows our website down. It is therefore imperative that we're not adding in any unnecessary elements that require additional parsing time by the browser.

Downloading media or feeds from external sources: It goes without saying that these are not under our control; if our website is dependent on them, then the speed of our website will be affected if these external sources fail.

Use of Flash: Websites that rely on heavy use of Flash will clearly be slower to access than those that don't use the technology. It is worth considering whether our website really needs to use it; recent changes by Adobe mean that Flash as a technology is being retired in favor of animation using other means, such as HTML5 Canvas or WebGL.

There is one point to consider that we've not covered in this list—the average size of a page has significantly increased since the dawn of the Internet in the mid-nineties. Although these figures may not be 100% accurate, they still give a stark impression of how things have changed:

1995: At that time, the average page size was around 14.1 KB, largely because a page contained only around 2 or 3 objects—that meant just 2 or 3 calls to the server on which the website was hosted.

2008: The average page size increased to around 498 KB, with an average use of around 70 objects, including CSS, images, and JavaScript.

Although this is tempered by the increased use of broadband, not everyone can afford fast access, so we will lose customers if our website is slow to load. All is not lost though—there are some tricks we can use to help optimize the performance of our websites.
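As one example of such a trick (class names and image paths here are invented for illustration), instead of hiding a large hero image with display: none, which still downloads it, we can gate it behind a media query as a background image, so small screens never request the file at all:

/* Small screens: no large image is requested */
.hero {
  background-image: none;
}

/* Wider screens: only now is the big asset downloaded */
@media only screen and (min-width: 768px) {
  .hero {
    background-image: url("img/hero-large.jpg");
  }
}

Browsers only fetch background images for rules that actually apply, so the mobile visitor never pays the bandwidth cost of the desktop asset.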
Testing website compatibility

At this stage, our website would be optimized and tested for performance—but what about compatibility? Although the range of available browsers has remained relatively static (at least for the ones in mainstream use), the functionality they offer is constantly changing—this makes it difficult for developers and designers to handle all of the nuances required to support each browser. In addition, the wide range makes it costly to support—in an ideal world, we would support every device available, but this is impossible; instead, we must use analytical software to determine which devices are being used, and are therefore worthy of support.

Working out a solution

If we test our website on a device such as an iPhone 6, then there is a good chance it will work as well on other Apple devices, such as iPads. The same can be said for testing on a mobile device such as a Samsung Galaxy S4—we can use this principle to help prioritize support for particular mobile devices, if they require more tweaks than other devices. Ultimately though, we must use analytical software to determine who visits our website; information such as the browser, source, OS, and device used will help determine what our target audience should be. This does not mean we completely neglect other devices; we can ensure they work with our website, but they will not be a priority during development.

A key point of note is that we should not attempt to support every device – this is too costly to manage, and we would never keep up with all of the devices available for sale! Instead, we can use our analytics software to determine which devices are being used by our visitors; we can then test a number of different properties:

Screen size: This should encompass a variety of different resolutions for desktop and mobile devices.
Connection speed: Testing across different connection speeds will help us understand how the website behaves, and identify opportunities or weaknesses where we may need to effect changes.
Pixel density: Some devices will support a higher pixel density, which allows them to display higher resolution images or content; this will make it easier to view and fix any issues with displaying content.
Interaction style: The ability to view the Internet across different devices means that we should consider how our visitors interact with the website: is it purely on a desktop, or do they use tablets, smartphones, or gaming-based devices? It's highly likely that the former two will be used to an extent, but the latter is not likely to feature as highly.

Once we've determined which devices we should be supporting, there is a range of tools available for us to use to test browser compatibility. These include physical devices (ideal, but expensive to maintain), emulators, or online services (these can be commercial or free). Let's take a look at a selection of what is available, to help us test compatibility.

Exploring tools available for testing

When we test a mobile or responsive website, there are factors which we need to consider before we start testing, to help deliver a website which looks consistent across all devices and browsers. These factors include:

Does the website look good?
Are there any bugs or defects?
Is our website really responsive?

To help test our websites, we can use any one of several tools (either paid or free)—a key point to note, though, is that we can already get a good idea of how well our websites work by simply using the developer toolbar that is available in most browsers!

Viewing with Chrome

We can easily emulate a mobile device within Chrome by pressing Ctrl + Shift + M; Chrome displays a toolbar at the top of the window, which allows us to select different devices: If we click on the menu entry (currently showing iPhone 6 Plus), and change it to Edit, we can add new devices; this allows us to set specific dimensions, user agent strings, and whether the device supports high-resolution images: Although browsers can go some way to providing an indication of how well our website works, they can only provide a limited view – sometimes we need to take things a step further and use commercial solutions to test our websites across multiple browsers at the same time. Let's take a look at some of the options available commercially.
Exploring our options

If you've spent any time developing code, then there is a good chance you may already be aware of Browserstack (from https://www.browserstack.com)—other options include the following:

GhostLab: https://www.vanamco.com/ghostlab/
Muir: http://labs.iqfoundry.com/
CrossBrowserTesting: http://www.crossbrowsertesting.com/

If, however, all we need to do is check our website for its level of responsiveness, then we don't need to use paid options – there are a number of websites that allow us to check, without needing to install plugins or additional tools:

Am I Responsive: http://ami.responsive.is
ScreenQueries: http://screenqueri.es
Cybercrab's screen check facility: http://cybercrab.com/screencheck
Remy Sharp's check website: http://responsivepx.com

We can also use bookmarklets to check how well our websites work on different devices – a couple of examples to try are at http://codebomber.com/jquery/resizer and http://responsive.victorcoulon.fr/; it is worth noting that current browsers already include this functionality, making the bookmarklets less attractive as an option.

We have now reached the end of our journey through the essentials of creating responsive websites with nothing more than plain HTML and CSS code. We hope you have enjoyed it as much as we have enjoyed writing it, and that it helps you make a start into the world of responsive design using little more than plain HTML and CSS.

Summary

This article covered responsive media (images and video), sizing text with em and rem units, media queries and breakpoints, the importance of performance, and the tools available for testing website compatibility and responsiveness.

Resources for Article:

Responsive Web Design with WordPress
Web Design Principles in Inkscape
Top Features You Need to Know About – Responsive Web Design

Developing Games Using AI

Packt
08 Aug 2017
In this article by Micael DaGraça, the author of the book Practical Game AI Programming, we will cover the following points to introduce you to AI programming for games:

A brief history of and solutions to game AI
Enemy AI in video games
From simple to smart and human-like AI
Visual and audio awareness

A brief history of and solutions to game AI

To better understand how to overcome the problems that game developers currently face, we need to dig a little into the history of video game development, and look at the problems and solutions that were so important at the time—some of them so avant-garde that they actually changed the entire history of video game design, and we still use the same methods today to create unique and enjoyable games.

One of the first milestones that is always worth mentioning when talking about game AI is the chess computer programmed to compete against humans. Chess was the perfect game to start experimenting with artificial intelligence, because it usually requires a lot of thought and planning ahead—something that a computer couldn't do at the time, because it was assumed that human features were necessary to successfully play and win the game. So the first step was to make the computer able to process the game rules and think for itself, in order to make a good judgment about the next move it should make to achieve the final goal: winning by checkmate. The problem: chess has so many possibilities that even if the computer had a perfect strategy to beat the game, it would be necessary to recalculate that strategy—adapting it, changing it, or even creating a new one—every time something went wrong with the first strategy.

Humans can play differently every time, which made it a huge task for programmers to input all the possibility data into the computer so it could win the game. Writing out every possibility that could exist wasn't a viable solution, and because of that, programmers needed to think about the problem again. Then one day they finally came up with a better solution: making the computer decide for itself each turn, choosing the most plausible option; that way the computer could adapt to any possibility in the game. Yet this involved another problem—the computer would only think in the short term, without creating any plans to defeat the human over future moves—so it was easy to play against, but at least we started to have something going on.

It would take decades until someone defined the term "Artificial Intelligence", by solving the problem that many researchers had faced in trying to create a computer capable of defeating a human player. Arthur Samuel is the person responsible for creating a computer that could learn for itself and memorize all the possible combinations. That way, no human intervention was necessary, and the computer could actually think on its own—a huge step that is still impressive even by today's standards.

Enemy AI in video games

Now let's move to the video game industry and analyze how the first enemies and game obstacles were programmed; was it that different from what we are doing now? Let's find out.
Single-player games with AI enemies started to appear in the 1970s, and soon some games started to elevate the quality and expectations of what defines a video game AI. Some of those examples were released for arcade machines, such as Speed Race from Taito (a racing video game) and Qwak (duck hunting using a light gun) or Pursuit (an aircraft fighter), both from Atari. Other notable examples are the text-based games released for the first personal computers, such as Hunt the Wumpus and Star Trek, which also had enemies. What made those games so enjoyable was precisely the AI enemies, which didn't react like any enemies before them: they had random elements mixed with the traditional stored patterns, making them unpredictable and a unique experience every time you played the game. That was only possible due to the incorporation of microprocessors, which expanded the capabilities of a programmer at that time.

Space Invaders brought movement patterns, Galaxian improved on them and added more variety, making the AI even more complex, and Pac-Man later brought movement patterns to the maze genre. The influence that the AI design in Pac-Man had is just as significant as the influence of the game itself. This classic arcade game makes the player believe that the enemies in the game are chasing him, but not in a crude manner. The ghosts chase (or evade) the player in different ways, as if they each had an individual personality. This gives people the illusion that they are actually playing against four or five individual ghosts rather than copies of the same computer enemy.

After that, Karate Champ introduced the first AI fighting character, Dragon Quest introduced a tactical system for the RPG genre, and over the years the list of games that explored artificial intelligence and used it to create unique game concepts kept expanding—and all of that came from a single question: how can we make a computer capable of beating a human at a game?

All the games mentioned above belong to different genres, and they are unique in their style, but all of them used the same method for their AI, called the finite state machine. Here, the programmer inputs all the behaviors necessary for the computer to challenge the player, just like the first computer that played chess. The programmer defines exactly how the computer should behave on different occasions in order to move, avoid, attack, or perform any other behavior that challenges the player, and that method is used even in the latest big-budget games of today.

From simple to smart and human-like AI

Programmers face many challenges while developing an AI character, but one of the greatest is adapting the AI's movement and behavior in relation to what the player is currently doing, or will do in future actions. The difficulty exists because the AI is programmed with predetermined states, using probability or possibility maps in order to adapt its movement and behavior according to the player. This technique can become very complex if the programmer extends the possibilities of the AI's decisions, just like the chess machine that has all the possible situations that may occur in the game. It's a huge task for the programmer, because it's necessary to determine what the player can do and how the AI will react to each action of the player, and that takes a lot of CPU power. To overcome that challenge, programmers started to mix possibility maps with probabilities and other techniques that let the AI decide for itself how it should react according to the player's actions.
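To make the idea of a finite state machine concrete, here is a minimal sketch in Python; the states, distances, and method names are our own illustration, not code from any particular game:

class EnemyAI:
    """A minimal finite state machine for an enemy character."""

    def __init__(self):
        self.state = "patrol"  # start in the default behavior

    def update(self, distance_to_player):
        # Transition between states based on what the player is doing
        if self.state == "patrol" and distance_to_player < 10:
            self.state = "chase"
        elif self.state == "chase":
            if distance_to_player < 2:
                self.state = "attack"
            elif distance_to_player > 15:
                self.state = "patrol"  # lost sight of the player
        elif self.state == "attack" and distance_to_player >= 2:
            self.state = "chase"

    def act(self):
        # Each state maps to one concrete behavior
        if self.state == "patrol":
            return "walking a preset route"
        if self.state == "chase":
            return "moving toward the player"
        return "attacking the player"

enemy = EnemyAI()
enemy.update(distance_to_player=8)    # the player comes into range
print(enemy.state, "->", enemy.act())  # chase -> moving toward the player

Mixing in probabilities, as described above, would mean replacing some of these hard transitions with weighted random choices, so the enemy doesn't always react the same way.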
These factors are important to consider while developing an AI that elevates the game's quality, as we are about to discover. Games kept evolving and players became even more demanding—not only about the visual quality, but also about the capabilities of the AI enemies and the allied characters. To deliver new games that took player expectations into consideration, programmers started to write even more states for each character, creating new possibilities, more engaging enemies, and important allied characters; more things for the player to do; and a lot more features that helped redefine different genres and create new ones. Of course, this was also possible because the technology kept improving, allowing developers to explore artificial intelligence in video games even further.

A great example worth mentioning is Metal Gear Solid; the game brought a new genre to the video game industry by implementing stealth elements, instead of the popular straightforward run-and-shoot approach. But those elements couldn't be fully explored as Hideo Kojima intended, because of the hardware limitations at the time. Jumping forward from the 3rd to the 5th generation of consoles, Konami and Hideo Kojima presented the same title, but this time with many more interactions, possibilities, and behaviors from the AI elements of the game, making it so successful and important in video game history that it's easy to see its influence in a large number of games that came after Metal Gear Solid.

Metal Gear Solid - Sony PlayStation 1

Visual and audio awareness

The game in the above screenshot implemented visual and audio awareness for the enemy. This feature established the genre that we know today as the stealth game. The game uses pathfinding and finite state machines, features that had been around since the beginning of the video game industry, but in order to create something new, the developers also created new features such as interaction with the environment, navigation behavior, visual/audio awareness, and AI-to-AI interaction. A lot of things that didn't exist at the time, but that are widely used today even in different game genres such as sports, racing, fighting, or FPS games, were introduced here.

After that huge step for game design, developers still faced other problems—or should I say, these new possibilities brought even more problems, because the result was not perfect. The AI still didn't react like a real person, and many other elements needed to be implemented, not only in stealth games but in all other genres; one genre in particular needed to improve its AI to make the game feel realistic. We are talking about sports games, especially those that tried to simulate real-world team behaviors, such as basketball or football. If we think about it, the interaction with the player is not the only thing that we need to care about; we left chess behind a long time ago, where it was 1 vs 1. Now we want more, and watching other games gain realistic AI behaviors, sports fanatics started to ask for those same features in their favorite games; after all, those games were based on real-world events, and for that reason the AI should react as realistically as possible. At this point, developers and game designers started to take into consideration the AI's interaction with itself, and just like the enemies in Pac-Man, the player should get the impression that each character in the game thinks for itself and reacts differently from the others.
If we analyze it closely, the AI present in a sports game is structured like that of an FPS or RTS game, using different animation states, general movements, interactions, individual decisions, and finally tactical and collective decisions. So it shouldn't be a surprise that sports games could reach the same level of realism as the other genres that greatly evolved in terms of AI development. However, there were a few problems that only sports games had at the time, chief among them how to make so many characters on the same screen react differently while working together to achieve the same objective. With this problem in mind, developers started to improve the individual behaviors of each character—not only for the AI playing against the player, but also for the AI playing alongside the player. Once again, finite state machines formed a crucial part of the artificial intelligence, but the special touch that helped create a realistic approach in the sports genre was the anticipation and awareness used in stealth games. The computer needed to calculate what the player was doing and where the ball was going, and coordinate all of that, while giving the impression of a team mindset working toward the same plan. By combining the features newly introduced in the stealth genre with a vast number of characters on the same screen, it was possible to innovate in the sports genre by creating the sports simulation type of game that has gained so much popularity over the years.

This helps us understand that we can use almost the same methods for any type of game, even if the games look completely different; the core principles that we saw in the computer that played chess are still valuable in a sports game released 30 years later.

Let's move on to our last example, which is also of great value in terms of how an AI character should behave to feel more realistic: the game F.E.A.R., developed by Monolith Productions. What made this game so special in terms of artificial intelligence was the dialogue between the enemy characters. While it wasn't an improvement from a technical point of view, it was definitely something that helped to showcase all of the development work that was put into the characters' AI—and this is crucial, because if the AI doesn't say it, it didn't happen. This is an important factor to take into consideration while creating a realistic AI character: giving the illusion that it's real, the impression that the computer reacts like a human. Humans interact, so the AI should do the same.

Not only does the dialogue help to create a human-like atmosphere, it also helps to showcase all of the development put into the character that the player otherwise wouldn't notice was there. When the AI detects the player for the first time, it shouts that it has found him; when the AI loses sight of the player, it expresses that emotion too. When a squad of AIs is trying to find the player or ambush him, they talk about it, leaving the player imagining that the enemy is really capable of thinking and planning against him. Why is this so important? Because if we only had numbers and mathematical equations driving the characters, they would react exactly that way—without any human features, just math. To make an AI look more human, it's necessary to put mistakes, errors, and dialogue into the character, precisely to distract the player from the fact that he's playing against a machine.
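As a toy illustration of this idea, again in Python and with every line of dialogue invented for the example, we can attach "barks" to the state transitions of the earlier finite state machine sketch:

# Spoken lines ("barks") tied to state transitions; all text is invented.
BARKS = {
    ("patrol", "chase"): "There he is! After him!",
    ("chase", "patrol"): "Lost him... resuming patrol.",
    ("chase", "attack"): "Open fire!",
}

def transition(enemy, new_state):
    bark = BARKS.get((enemy.state, new_state))
    if bark:
        print(bark)  # surface the AI's "thinking" to the player
    enemy.state = new_state

The AI's internal bookkeeping doesn't change at all; the bark simply surfaces the state change, so the player believes the enemy is genuinely reasoning about the situation.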
The history of video game artificial intelligence is still far from perfect, and it may well take decades to improve even a little on what we have achieved from the early 1950s to the present day. So don't be afraid of exploring what you are about to learn; combine, change, or delete some of these things to find different results, because great games did exactly that in the past, and they had a lot of success with it.

Summary

In this article, we learned about the impact of AI on video game history: how everything started from a simple idea of having a computer compete against humans in traditional games, and how that naturally evolved into the world of video games. We also learned about the challenges and difficulties that have been present since day one, and how programmers kept facing—and still face—the same problems.

Building JSF Forms

Packt
25 Sep 2015
In this article by Peter Pilgrim, author of the book Java EE 7 Web Application Development, we will learn about JavaServer Faces (JSF) as an example of a component-oriented web application framework, as opposed to Java EE 8 MVC, WebWork, or Apache Struts, which are known as request-oriented web application frameworks. A request-oriented framework is one where the information flow is web request and response. Such frameworks provide ability and structure above the javax.servlet.http.HttpServletRequest and javax.servlet.http.HttpServletResponse objects, but there are no special user interface components; the developer must program the mapping of parameters and attributes to the data entity models, and therefore has to write the parsing logic.

It is important to understand that component-oriented frameworks like JSF have their detractors. A quick inspection of the code resembles components in a standalone client such as Java Swing or even JavaFX, but behind the scenes lurk the very same HttpServletRequest and HttpServletResponse. Hence, a competent JSF developer still has to be aware of the Servlet API and the underlying servlet scopes. This was a valid criticism in 2004; in the digital marketing age, a developer has to know more than just the Servlet API, and we can presume they would be open to learning other technologies such as JavaScript.

(For more resources related to this topic, see here.)

Create, Retrieve, Update, and Delete

In this article, we are going to solve an everyday problem with JSF. The Java EE framework and enterprise applications are about solving data entry issues. Unlike social networking software, which is built with a different architecture and different non-functional requirements (scalability, performance, statelessness, and eventual consistency), Java EE applications are designed for stateful workflows.

Following is the screenshot of the page view for creating contact details:

The preceding screenshot is the JSF application jsf-crud, which shows a contact details form. Typically, an enterprise application captures information from a web user, stores it in a data store, and allows that information to be retrieved and edited. There is usually an option to delete the user's information. In software engineering, we call this idiom Create, Retrieve, Update, and Delete (CRUD). What constitutes actual deletion of user and customer data is ultimately a matter for business owners, who are under pressure to conform to local and international laws that define privacy and data protection.

Basic create entity JSF form

Let's create a basic form that captures the user's name, e-mail address, and date of birth. We shall write this code using HTML5 and take advantage of Bootstrap for modern CSS and JavaScript (see http://getbootstrap.com/getting-started/). Here is the JSF Facelet view createContact.xhtml:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html"
      xmlns:f="http://xmlns.jcp.org/jsf/core"
      xmlns:p="http://xmlns.jcp.org/jsf/passthrough">
<h:head>
  <meta charset="utf-8"/>
  <title>Demonstration Application</title>
  <link href="#{request.contextPath}/resources/styles/bootstrap.css" rel="stylesheet"/>
  <link href="#{request.contextPath}/resources/styles/main.css" rel="stylesheet"/>
</h:head>
<h:body>
  <div class="main-container">
    <div class="header-content">
      <div class="navbar navbar-inverse" role="navigation">
      </div>
    </div><!-- headerContent -->
    <div class="mainContent">
      <h1> Enter New Contact Details </h1>
      <h:form id="createContactDetail" styleClass="form-horizontal" p:role="form">
      ...
      </h:form>
    </div><!-- main-content -->
    <div class="footer-content">
    </div> <!-- footer-content -->
  </div> <!-- main-container -->
</h:body>
<script src="#{request.contextPath}/resources/javascripts/jquery-1.11.0.js"></script>
<script src="#{request.contextPath}/resources/javascripts/bootstrap.js"></script>
<script src="#{request.contextPath}/resources/app/main.js"></script>
</html>

You already recognize the <h:head> and <h:body> JSF custom tags. Because the file is a Facelet view (*.xhtml), the document must actually be well formed, like an XML document. You should have noticed that certain HTML5 element tags, like <meta>, are closed and completed: the XHTML document must be well-formed in JSF.

Always close XHTML elements

The typical e-commerce application has web pages with standard HTML with <meta>, <link>, and <br> tags. In XHTML and Facelet views these tags, which web designers normally leave open and hanging, must be closed. Extensible Markup Language (XML) is less forgiving, and XHTML, which is derived from XML, must be well formed.

The new tag <h:form> is a JSF custom tag that corresponds to the HTML form element. A JSF form element shares many of the attributes of its HTML counterpart. You can see the id attribute is just the same. However, instead of the class attribute, in JSF we have the styleClass attribute, because in Java the method java.lang.Object.getClass() is reserved and therefore cannot be overridden.

What is the JSF request context path expression?

The curious markup around the links to the style sheets, JavaScript, and other resources is the expression language #{request.contextPath}. The expression reference ensures that the web application path is added to the URL of JSF resources. Bootstrap CSS itself relies on font glyph icons in a particular folder. JSF images, JavaScript module files, and CSS files should be placed in the resources folder of the web root.

The p:role attribute is an example of a JSF pass-through attribute, which informs the JSF render kit to send the key and value through to the rendered output. Pass-through attributes are an important addition in JSF 2.2, which is part of Java EE 7. They allow JSF to play well with recent HTML5 frameworks such as Bootstrap and Foundation (http://foundation.zurb.com/). Here is an extract of the rendered HTML source output:

<h1> Enter New Contact Details </h1>
<form id="createContactDetail" name="createContactDetail" method="post" action="/jsf-crud-1.0-SNAPSHOT/createContactDetail.xhtml" class="form-horizontal" enctype="application/x-www-form-urlencoded" role="form">
<input type="hidden" name="createContactDetail" value="createContactDetail" />

JSF was implemented before Bootstrap was created at Twitter. How could the JSF designers retrofit the framework to be compatible with recent HTML5, CSS3, and JavaScript innovations? This is where pass-through attributes help. By declaring the pass-through XML namespace in the XHTML, xmlns:p="http://xmlns.jcp.org/jsf/passthrough", any attribute with the p: prefix is simply passed through to the output. Pass-through attributes allow JSF to easily handle HTML5 features such as placeholders in text input fields, as we will exploit from now onwards.

If you are brand new to web development, you might be scared of markup that appears overcomplicated. There are lots and lots of DIV HTML elements, which are often created by page designers and interface developers. This is the historical effect, and just the way HTML and the Web have evolved over time. The practices of 2002 have no bearing on 2016.
Let's take a deeper look at the <h:form> and fill in the missing details. Here is the extracted code:

<h:form id="createContactDetail" styleClass="form-horizontal" p:role="form">
  <div class="form-group">
    <h:outputLabel for="title" class="col-sm-3 control-label">Title</h:outputLabel>
    <div class="col-sm-9">
      <h:selectOneMenu class="form-control" id="title" value="#{contactDetailController.contactDetail.title}">
        <f:selectItem itemLabel="--" itemValue="" />
        <f:selectItem itemValue="Mr" />
        <f:selectItem itemValue="Mrs" />
        <f:selectItem itemValue="Miss" />
        <f:selectItem itemValue="Ms" />
        <f:selectItem itemValue="Dr" />
      </h:selectOneMenu>
    </div>
  </div>
  <div class="form-group">
    <h:outputLabel for="firstName" class="col-sm-3 control-label">First name</h:outputLabel>
    <div class="col-sm-9">
      <h:inputText class="form-control" value="#{contactDetailController.contactDetail.firstName}" id="firstName" placeholder="First name"/>
    </div>
  </div>
  ... Rinse and repeat for middleName and lastName ...
  <div class="form-group">
    <h:outputLabel for="email" class="col-sm-3 control-label">Email address</h:outputLabel>
    <div class="col-sm-9">
      <h:inputText type="email" class="form-control" id="email" value="#{contactDetailController.contactDetail.email}" placeholder="Enter email"/>
    </div>
  </div>
  <div class="form-group">
    <h:outputLabel class="col-sm-3 control-label">Newsletter</h:outputLabel>
    <div class="col-sm-9 checkbox">
      <h:selectBooleanCheckbox id="allowEmails" value="#{contactDetailController.contactDetail.allowEmails}">
        Send me email promotions
      </h:selectBooleanCheckbox>
    </div>
  </div>
  <h:commandButton styleClass="btn btn-primary" action="#{contactDetailController.createContact()}" value="Submit" />
</h:form>

This form is built using Bootstrap CSS styles, but we shall ignore the extraneous details and concentrate purely on the JSF custom tags. The <h:selectOneMenu> is a JSF custom tag that corresponds to the HTML Form Select element. The <f:selectItem> tag corresponds to the HTML Form Select Option element. The <h:inputText> tag corresponds to the HTML Form Input element. The <h:selectBooleanCheckbox> tag is a special custom tag to represent an HTML Select with only one Checkbox element. Finally, <h:commandButton> represents an HTML Form Submit element.

JSF HTML Output Label

The <h:outputLabel> tag renders the HTML Form Label element.

<h:outputLabel for="firstName" class="col-sm-3 control-label">First name</h:outputLabel>

Developers should prefer this tag in conjunction with the other associated JSF form input tags, because the special for attribute targets the correct sugared identifier for the element. Here is the rendered output:

<label for="createContactDetail:firstName" class="col-sm-3 control-label">First name</label>

We could have written the tag using the value attribute, so that it looks like this:

<h:outputLabel for="firstName" class="col-sm-3 control-label" value="First name" />

It is also possible to take advantage of internationalization at this point, so just for illustration, we could rewrite the page content as:

<h:outputLabel for="firstName" class="col-sm-3 control-label" value="${myapplication.contactForm.firstName}" />

JSF HTML Input Text

The <h:inputText> tag allows text data to be entered in the form.

<h:inputText class="form-control" value="#{contactDetailController.contactDetail.firstName}" id="firstName" placeholder="First name"/>

The value attribute holds a JSF expression language reference; the clue is that the evaluation string starts with a hash character.
The expression references a scoped backing bean, ContactDetailController.java, with the name contactDetailController. In JSF 2.2, there are now convenience attributes for HTML5 support, so the standard id, class, and placeholder attributes work as expected. The rendered output is like this:

<input id="createContactDetail:firstName" type="text" name="createContactDetail:firstName" class="form-control" />

Notice that the sugared identifier createContactDetail:firstName matches the output of the <h:outputLabel> tag.

JSF HTML Select One Menu

The <h:selectOneMenu> tag generates a single-select drop-down list. In fact, it is part of a family of selection-type custom tags. See the <h:selectBooleanCheckbox> in the next section. In the form, we have the following code:

<h:selectOneMenu class="form-control" id="title" value="#{contactDetailController.contactDetail.title}">
  <f:selectItem itemLabel="--" itemValue="" />
  <f:selectItem itemValue="Mr" />
  <f:selectItem itemValue="Mrs" />
  <f:selectItem itemValue="Miss" />
  <f:selectItem itemValue="Ms" />
  <f:selectItem itemValue="Dr" />
</h:selectOneMenu>

The <h:selectOneMenu> tag corresponds to an HTML Form Select tag. The value attribute is again a JSF expression language string. In JSF, we can use another new custom tag, <f:selectItem>, to define option items in place. The <f:selectItem> tag accepts itemLabel and itemValue attributes. If you set the itemValue and do not specify the itemLabel, then the value becomes the label. So for the first item, the label is set to --, but the value submitted to the form is a blank string, because we want to hint to the user that there is a value that ought to be chosen. The rendered HTML output is instructive:

<select id="createContactDetail:title" size="1" name="createContactDetail:title" class="form-control">
  <option value="" selected="selected">--</option>
  <option value="Mr">Mr</option>
  <option value="Mrs">Mrs</option>
  <option value="Miss">Miss</option>
  <option value="Ms">Ms</option>
  <option value="Dr">Dr</option>
</select>

JSF HTML Select Boolean Checkbox

The <h:selectBooleanCheckbox> custom tag is a special case of selection where there is only one item that the user can choose. Typically, in a web application, you will find such an element in the final terms and conditions form, or in the marketing e-mail section of an e-commerce application. In the targeted managed bean, the bound value must be a Boolean type.

<h:selectBooleanCheckbox id="allowEmails" value="#{contactDetailController.contactDetail.allowEmails}">
  Send me email promotions
</h:selectBooleanCheckbox>

The rendered output for this custom tag looks like:

<input id="createContactDetail:allowEmails" type="checkbox" name="createContactDetail:allowEmails" />

JSF HTML Command Button

The <h:commandButton> custom tag corresponds to the HTML Form Submit element. It accepts an action attribute in JSF that refers to a method in a backing bean. The syntax is again in the JSF expression language.

<h:commandButton styleClass="btn btn-primary" action="#{contactDetailController.createContact()}" value="Submit" />

When the user presses this submit button, the JSF framework will find the named managed bean corresponding to contactDetailController, and then invoke the no-arguments method createContact(). In the expression language, it is important to note that the parentheses are not required, because the interpreter (or Facelets) automatically introspects whether the meaning is an action (MethodExpression) or a value definition (ValueExpression).
Be aware that most examples in the real world omit the parentheses as shorthand. The value attribute denotes the text for the form submit button. We could have written the tag in an alternative way and achieved the same result:

<h:commandButton styleClass="btn btn-primary" action="#{contactDetailController.createContact()}">
  Submit
</h:commandButton>

Here, the value is taken from the body content of the custom tag. The rendered output of the tag looks something like this:

<input type="submit" name="createContactDetail:j_idt45" value="Submit" class="btn btn-primary" />
<input type="hidden" name="javax.faces.ViewState" id="j_id1:javax.faces.ViewState:0" value="-3512045671223885154:3950316419280637340" autocomplete="off" />

The above code illustrates the output from the JSF renderer in the Mojarra implementation (https://javaserverfaces.java.net/), which is the reference implementation. You can clearly see that the renderer writes an HTML submit element and a hidden element in the output. The hidden element captures information about the view state that is posted back to the JSF framework (postback), which allows it to restore the view.

Finally, here is a screenshot of this contact details form:

The contact details input JSF form with additional DOB fields

Now let's examine the backing bean, also known as the controller.

Backing bean controller

For our simple POJO form, we need a backing bean or, in modern-day JSF developer parlance, a managed bean controller. This is the entire code for the ContactDetailController:

package uk.co.xenonique.digital;

import javax.ejb.EJB;
import javax.inject.Named;
import javax.faces.view.ViewScoped;
import java.util.List;

@Named("contactDetailController")
@ViewScoped
public class ContactDetailController {
    @EJB
    ContactDetailService contactDetailService;

    private ContactDetail contactDetail = new ContactDetail();

    public ContactDetail getContactDetail() {
        return contactDetail;
    }

    public void setContactDetail(ContactDetail contactDetail) {
        this.contactDetail = contactDetail;
    }

    public String createContact() {
        contactDetailService.add(contactDetail);
        contactDetail = new ContactDetail();
        return "index.xhtml";
    }

    public List<ContactDetail> retrieveAllContacts() {
        return contactDetailService.findAll();
    }
}

For this managed bean, let's introduce a couple of new annotations. The first annotation is called @javax.inject.Named, and it declares this POJO to be a CDI managed bean, which also simultaneously declares a JSF controller. Here, we explicitly declare the name of the managed bean as contactDetailController. This is actually the default name of the managed bean, so we could have left it out. We can also write an alternative name like this:

@Named("wizard")
@ViewScoped
public class ContactDetailController { /* ... */ }

Then JSF would give us the bean with the name wizard. The name of the managed bean helps in the expression language syntax. When we are talking about JSF, we can interchange the term backing bean with managed bean freely; many professional Java web developers understand that both terms mean the same thing!

The @javax.faces.view.ViewScoped annotation denotes that the controller has a view-scoped life cycle. The view scope is designed for the situation where the application data is preserved just for one page, until the user navigates to another page. As soon as the user navigates to another page, JSF destroys the bean: JSF removes the reference to the view-scoped bean from its internal data structures, and the object is left for the garbage collector.
The @ViewScoped annotation is new in Java EE 7 and JSF 2.2, and fixes a bug between the Faces and CDI specifications. This is because CDI and JSF were developed independently. By looking at the Javadoc, you will find an older annotation, @javax.faces.bean.ViewScoped, which comes from JSF 2.0 and was not part of the CDI specification. For now, if you choose to write controllers with the older @ViewScoped annotation, you should probably use @ManagedBean with it; we will explain further on.

The ContactDetailController also has a dependency on an EJB service endpoint, ContactDetailService, and most importantly it has a bean property, contactDetail. Note the getter and setter methods; we also ensure that the property is instantiated at construction time. We now turn our attention to the methods.

public String createContact() {
    contactDetailService.add(contactDetail);
    contactDetail = new ContactDetail();
    return "index.xhtml";
}

public List<ContactDetail> retrieveAllContacts() {
    return contactDetailService.findAll();
}

The createContact() method uses the EJB to create a new contact detail. It returns a String, which is the next Facelet view, index.xhtml. This method was referenced by the <h:commandButton>. The retrieveAllContacts() method invokes the data service to fetch the list collection of entities. This method will be referenced by another page.

Summary

In this article, we learned about JSF forms: we explored HTML and core JSF custom tags in building the answer to one of the most sought-after questions on the Internet. It is surprising that this simple idea is considered difficult to program. We built a digital JSF form that created a contact detail, and we saw the Facelet view, the managed bean controller, the stateful session EJB, and the entity.

Resources for Article:

Further resources on this subject:
WebSockets in Wildfly [article]
Prerequisites [article]
Contexts and Dependency Injection in NetBeans [article]

Artificial intelligence, data science, and big data in 2019: what really mattered

Richard Gall
16 Dec 2019
The techlash hasn't died down - it's just become normalized. Barely a day passes without a new scandal emerging, from questionable surveillance to racist AI algorithms. But it hasn't all been bad: while negatives get a lot of attention (and so they should - the consequences of tech can be lethal, both societally and literally), there was still plenty to get excited about. And for those working in the data profession - as analysts, scientists, and engineers - there were several important trends that really helped to define where we are now from a purely practical perspective, as well as hinting at where we might go in the future. With just a few weeks left to go of the year (and the decade!), let's look at some of the key things that defined this year in the field of data science and data engineering.

The growth of PyTorch

TensorFlow is undoubtedly the most popular deep learning framework. You might even say that its role in popularizing deep learning and artificial intelligence has been understated. But while TensorFlow has held its place for some time, 2019 was the year when things started to change. Look, for example, at this Google Trends graph (and yes, I know it's not in any way scientific):

As you can see, TensorFlow hit its stride pretty early on. It's only in the last 12 months or so that PyTorch has been narrowing the gap. One of the reasons for this is the fact that PyTorch 1.0 was released at the end of last year. This has been the foundation that has spurred its growth over the last 12 months, effectively announcing its 'official' arrival on the scene, with Facebook (PyTorch's creator) building on this foundation throughout the year with a few small but important releases. PyTorch 1.3, for example, which was released at the PyTorch Developer Conference in October, included a number of 'experimental' new features, including named tensors and PyTorch Mobile. Another reason for PyTorch's growth this year is that it is finding traction in the research field. This article provides some hard data showing that PyTorch is starting to grow in this area, citing the tool's comparable simplicity, API, and performance as the reasons it's undermining TensorFlow's utter dominance of the field. Find our PyTorch bundle, and other data bundles, here. Grab 5 titles for just $25.

TensorFlow 2.0

While PyTorch has grown significantly in 2019, TensorFlow is nevertheless still holding its place at the top of the deep learning rankings. And TensorFlow 2.0 has undoubtedly cemented its position. With the alpha release getting developers excited since March, the full launch of 2.0 marked an important milestone for the project. The key difference between TensorFlow 2.0 and 1.0 is ultimately accessibility and ease of use. Despite its massive popularity, TensorFlow 1.0 always had a reputation for being a little more difficult to use than many other deep learning tools. The team were clearly aware of this and have done a lot to make life easier for TensorFlow developers. "With tight integration of Keras into TensorFlow, eager execution by default, and Pythonic function execution," the team write in the release notes, "TensorFlow 2.0 makes the experience of developing applications as familiar as possible for Python developers." When placed alongside the exciting development of PyTorch, it's clear that these two tools are going to be defining deep learning in the year - or years - to come. Get up to date with what's new in TensorFlow 2.0 with TensorFlow 2.0 Quick Start Guide.
Stream processing with Kafka, Flink, and others

Dealing with large quantities of data in real-time is now the cutting edge of big data. It's for this reason that this year we've started to see stream processing gain headway in the mainstream. Although it's been an important technique for organizations with data-intensive needs for some time, the use of cloud and hybrid solutions - as well as an overall awareness of the opportunities of real-time data - has made it truly mainstream. In turn, this is giving new prominence to a range of stream-processing platforms. Kafka, Spark, and Flink are just three of the most well-known names in this space, but the market is undoubtedly growing. Another key driver here is Nvidia - as one of the leading hardware companies, it deserves a lot of credit for helping to make massive processing power accessible to organizations that wouldn't have had a chance just a few years ago. With CUDA, Nvidia's parallel programming paradigm for GPUs, the company is helping all sorts of users to leverage stream processing in different ways. Get started with Apache Kafka with Apache Kafka Quick Start Guide.

Data analysis on the cloud

Although I've already mentioned how influential TensorFlow was in popularizing deep learning, today public cloud is going even further. It's making artificial intelligence and analytics accessible to new roles (thinking here about tools like Azure Machine Learning Studio and Amazon SageMaker), as well as making it easier to build and deploy machine learning models in applications and products. In recent weeks, Microsoft has made another step in its bid to eat into AWS's market share with Azure Synapse. Essentially a next-generation Azure SQL Data Warehouse, Synapse is designed to bridge the gap between data lake and data warehouse - so, offering massive scale, and improving analytical speed. It will be interesting to see how this plays with the wider market. AWS might respond with something similar - but the onus remains on Microsoft to shift mindshare; AWS will want to consolidate its powerful position.

Security

It would be wrong to suggest that security is a new issue in the world of data engineering and analytics. But in 2019 it's become almost impossible to think about the two domains as separate from one another. This cuts two different ways: on the one hand, the emphasis on securing data and protecting privacy has never been greater. On the other hand, artificial intelligence and machine learning have started to play a critical part in the way that we monitor and identify threats to our systems. To a certain extent this expresses the double bind that data poses: the amount of data at our disposal is a nightmare from a governance and architectural perspective, but it is, at the same time, a way of mitigating that very nightmare. All in all, then, a bit of a vicious cycle, but nevertheless a reminder that however big our data gets, and however much we try to automate, there will always be a need for humans to think creatively and strategically about how we actually go about solving problems. Explore Packt's security bundles now. For more technology eBooks and videos to prepare you for 2020, head to the Packt store.
How to recover deleted data from an Android device [Tutorial]

Sugandha Lahoti
04 Feb 2019
11 min read
In this tutorial, we are going to learn about data recovery techniques that enable us to view data that has been deleted from a device. Deleted data could contain highly sensitive information, which makes data recovery a crucial aspect of mobile forensics. This article will cover the following topics:

Data recovery overview
Recovering data deleted from an SD card
Recovering data deleted from a phone's internal storage

This article is taken from the book Learning Android Forensics by Oleg Skulkin, Donnie Tindall, and Rohit Tamma. This book is a comprehensive guide to Android forensics, from setting up the workstation to analyzing key artifacts.

Data recovery overview

Data recovery is a powerful concept within digital forensics. It is the process of retrieving deleted data from a device or SD card when it cannot be accessed normally. Being able to recover data that has been deleted by a user could help solve civil or criminal cases, because many accused persons simply delete data from their devices hoping that the evidence will be destroyed. Thus, in most criminal cases, deleted data can be crucial, as it may contain information the user wanted to erase from their Android device. For example, consider the scenario where a mobile phone has been seized from a terrorist: wouldn't it be of the greatest importance to know which items were deleted by them? Access to deleted SMS messages, pictures, dialed numbers, and so on could be of critical importance, as they may reveal a lot of sensitive information.

From a normal user's point of view, recovering deleted data usually means referring to the operating system's built-in solutions, such as the Recycle Bin in Windows. While it's true that data can be recovered from these locations, due to an increase in user awareness these options often don't work. For instance, on a desktop computer, people now use Shift + Del whenever they want to delete a file completely. Similarly, in mobile environments, users are aware of the restore operations provided by apps, and so on.

In spite of this, data recovery techniques allow a forensic investigator to access data that has been deleted from the device. With respect to Android, it is possible to recover most deleted data, including SMS messages, pictures, application data, and so on. But it is important to seize the device in a proper manner and to follow certain procedures; otherwise, data might be deleted permanently. To ensure that deleted data is not lost forever, keep the following points in mind:

Do not use the phone for any activity after seizing it. A deleted text message exists on the device until its space is needed by some other incoming data, so the phone must not be used for any sort of activity, to prevent the data from being overwritten.
Even when the phone is not used, data can be overwritten without any intervention from our end. For instance, an incoming SMS would automatically occupy space, which may overwrite deleted data. Remote wipe commands can also wipe the content present on the device. To prevent such events, consider placing the device in a Faraday bag. Care should be taken to prevent the delivery of any new messages or data through any means of communication.

How can deleted files be recovered?

When a user deletes any data from a device, the data is not actually erased and continues to exist on the device. What gets deleted is the pointer to that data.
All filesystems contain metadata, which maintains information about the hierarchy of files, filenames, and so on. Deletion does not really erase the data; instead, it removes the filesystem metadata. Thus, when text messages or any other files are deleted from a device, they are just made invisible to the user, but the files are still present on the device as long as they are not overwritten by some other data. Hence, there is the possibility of recovering them before new data is added and occupies the space. Deleting the pointer and marking the space as available is an extremely fast operation compared to actually erasing all the data, so to increase performance, operating systems just delete the metadata.

Recovering deleted data on an Android device involves three scenarios:

Recovering data that is deleted from the SD card, such as pictures, videos, and so on
Recovering data that is deleted from SQLite databases, such as SMS, chats, web history, and so on
Recovering data that is deleted from the device's internal storage

The following sections cover the techniques that can be used to recover deleted data from SD cards and from the internal storage of an Android device.

Recovering deleted data from SD cards

Data present on an SD card can reveal a lot of information that is useful during a forensic investigation. The fact that pictures, videos, voice recordings, and application data are stored on the SD card adds weight to this. As mentioned in the previous chapters, Android devices often use FAT32 or exFAT file systems on their SD cards. The main reason for this is that these file systems are widely supported by most operating systems, including Windows, Linux, and macOS. The maximum file size on a FAT32-formatted drive is around 4 GB; with increasingly high-resolution formats now available, this limit is commonly reached, which is why newer devices support exFAT, a file system that doesn't have this limitation.

Recovering data deleted from an external SD card is pretty easy if the card can be mounted as a drive. If the SD card is removable, it can be mounted by connecting it to a computer using a card reader, and files can be transferred to and from it while it's mounted. Some older devices that use USB mass storage also mount as a drive when connected through a USB cable.

In order to make sure that the original evidence is not modified, a physical image of the disk is taken and all further experimentation is done on the image itself. Similarly, in the case of SD card analysis, an image of the SD card needs to be taken. Once the imaging is done, we have a raw image file. In our example, we will use FTK Imager by AccessData, an imaging utility that, in addition to creating disk images, can also be used to explore the contents of a disk image. The following steps can be followed to recover the contents of an SD card using this tool:

Start FTK Imager and click on File and then Add Evidence Item... in the menu, as shown in the following screenshot:

Adding an evidence source to FTK Imager

Select Image File in the Select Source dialog and click on Next. In the Select File dialog, browse to the location where you downloaded the sdcard.dd file, select it, and click on Finish, as shown in the following screenshot:

Selecting the image file for analysis in FTK Imager

FTK Imager's default display will appear, with the contents of the SD card visible in the View pane at the lower right.
You can also click on the Properties tab below the lower-left pane to view the properties of the disk image. Now, in the left pane, the drive has been opened. You can open folders by clicking on the + sign; when a folder is highlighted, its contents are shown in the right pane, and when a file is selected, its contents can be seen in the bottom pane. As shown in the following screenshot, deleted files have a red X over the icon derived from their file extension:

Deleted files shown with a red X over their icons

As shown in the following screenshot, to export a file, right-click on the file that contains the picture and select Export Files...:

Sometimes, only a fragment of a file is recoverable, and it cannot be read or viewed directly. In that case, we need to look through free or unallocated space for more data. Carving can be used to recover files from free and unallocated space; PhotoRec is one of the tools that can help you do that. You will learn more about file carving with PhotoRec in the following sections (a minimal carving sketch also appears at the end of this tutorial).

Recovering deleted data from internal memory

Recovering files deleted from Android's internal memory, such as app data, is not as easy as recovering data from SD cards and SQLite databases, but it is certainly not impossible. Many commercial forensic tools are capable of recovering deleted data from Android devices, provided a physical acquisition is possible and the user data partition isn't encrypted. But this is not very common for modern devices, especially those running the most recent versions of the operating system, such as Oreo and Pie.

Most Android devices, especially modern smartphones and tablets, use the EXT4 file system to organize data in their internal storage. This file system is very common for Linux-based devices. So, if we want to recover deleted data from the device's internal storage, we need a tool capable of recovering deleted files from the EXT4 file system. One such tool is extundelete, which is available for download here: http://extundelete.sourceforge.net/.

To recover the contents of an inode, extundelete searches a file system's journal for an old copy of that inode. Information contained in the inode helps the tool locate the file within the file system. To recover not only a file's contents but also its name, extundelete is able to search the deleted entries in a directory to match the inode number of a file to a file name.

To use this tool, you will need a Linux workstation. Most forensic Linux distributions have it already on board. For example, the following is a screenshot from SIFT Workstation - a popular digital forensics and incident response Linux distribution created by Rob Lee and his team from the SANS Institute (https://digital-forensics.sans.org/community/downloads):

extundelete command-line options

Before you can start the recovery process, you will need to mount a previously imaged userdata partition. In this example, we are going to use an Android device imaged via the chip-off technique. First of all, we need to determine the location of the userdata partition within the image. To do this, we can use mmls from The Sleuth Kit, as shown in the following screenshot:

Android device partitions

As you can see in the screenshot, the userdata partition is the last one and starts at sector 9199616.
To make sure the userdata partition is EXT4-formatted, let's use fsstat, as shown in the following example:

A part of the fsstat output

All you need to do now is mount the userdata partition and run extundelete against it, as shown in the following example:

extundelete /userdata/partition/mount/point --restore-all

All recovered files will be saved to a subdirectory of the current directory named RECOVERED_FILES. If you are interested in recovering files from before or after a specified date, you can use the --before date and --after date options. It's important to note that these dates must be in UNIX Epoch format. There are quite a lot of both online and offline tools capable of converting timestamps; for example, you can use https://www.epochconverter.com/.

As you can see, this method isn't very easy or fast, but there is a better way: using Autopsy, an open source digital forensics tool. In the following example, we used a built-in file extension filter to find all the images on the Android device, and found a lot of deleted artifacts:

Recovering deleted files from an EXT4 partition with Autopsy

Summary

Data recovery is the process of retrieving deleted data from a device and is thus a very important concept in forensics. In this chapter, we have seen various techniques to recover deleted data from both the SD card and the internal memory. While recovering data from a removable SD card is easy, recovering data from internal memory involves a few complications. SQLite file parsing and file carving techniques aid a forensic analyst in recovering the deleted items present in the internal memory of an Android device.

In order to understand the forensic perspective and the analysis of Android apps, read our book Learning Android Forensics.

What role does Linux play in securing Android devices?
How the Titan M chip will improve Android security
Getting your Android app ready for the Play Store [Tutorial]
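As referenced in the SD card section above, here is a minimal, illustrative file-carving sketch in Python. It scans a raw image for JPEG start-of-image and end-of-image markers and dumps each candidate byte range to a file. This is a toy version of what dedicated carvers like PhotoRec do far more robustly - it ignores fragmentation and false positives, and it reads the whole image into memory, so it only suits small images. The sdcard.dd filename refers to the example image used earlier.

import re

SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker plus first byte of next segment
EOI = b"\xff\xd9"      # JPEG end-of-image marker

# Read the raw image produced during acquisition; fine for small images only.
with open("sdcard.dd", "rb") as f:
    data = f.read()

count = 0
for match in re.finditer(re.escape(SOI), data):
    start = match.start()
    end = data.find(EOI, start)
    if end == -1:
        break  # no terminator after this point; stop carving
    with open("carved_%d.jpg" % count, "wb") as out:
        out.write(data[start:end + len(EOI)])  # include the EOI marker
    count += 1

print("carved %d candidate JPEGs" % count)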