Vulnerabilities in the Application and Transport Layer of the TCP/IP stack

Melisha Dsouza
07 Feb 2019
15 min read
The Transport layer is responsible for end-to-end data communication and acts as an interface for network applications to access the network. This layer also takes care of error checking, flow control, and verification in the TCP/IP protocol suite. The Application layer handles the details of a particular application and performs three main tasks: formatting data, presenting data, and transporting data. In this tutorial, we will explore the different types of vulnerabilities in the Application and Transport layers.

This article is an excerpt from CompTIA Network+ Certification Guide, written by Glen D. Singh and Rishi Latchmepersad. The book covers all CompTIA certification exam topics in an easy-to-understand manner, along with plenty of self-assessment scenarios for better preparation. It will not only prepare you conceptually but will also help you pass the N10-007 exam.

Vulnerabilities in the Application Layer

The following are some of the Application layer protocols which we should pay close attention to in our network:

- File Transfer Protocol (FTP)
- Telnet
- Secure Shell (SSH)
- Simple Mail Transfer Protocol (SMTP)
- Domain Name System (DNS)
- Dynamic Host Configuration Protocol (DHCP)
- Hypertext Transfer Protocol (HTTP)

Each of these protocols was designed to provide the function it was built for, with a lesser focus on security. Malicious users and hackers are able to compromise both the applications that utilize these protocols and the network protocols themselves.

Cross-Site Scripting (XSS)

XSS focuses on exploiting a weakness in websites. In an XSS attack, the malicious user or hacker injects client-side scripts into a web page/site that a potential victim would trust. The scripts can be JavaScript, VBScript, ActiveX, HTML, or even Flash (ActiveX), and will be executed on the victim's system. These scripts are masked as legitimate requests between the web server and the client's browser. XSS focuses on the following:

- Redirecting a victim to a malicious website/server
- Using hidden iframes and pop-up messages on the victim's browser
- Data manipulation
- Data theft
- Session hijacking

Let's take a deeper look at what happens in an XSS attack:

1. An attacker injects malicious code into a web page/site that a potential victim trusts. A trusted site can be a favorite shopping website, a social media platform, or a school or university web portal.
2. A potential victim visits the trusted site.
3. The malicious code interacts with the victim's web browser and executes. The web browser is usually unable to determine whether the scripts are malicious and therefore still executes the commands.
4. The malicious scripts can be used to obtain cookie information, tokens, session information, and so on about other websites that the browser has stored information about.
5. The acquired details (cookies, tokens, session IDs, and so on) are sent back to the hacker, who in turn uses them to log in to the sites that the victim's browser has visited.

There are two types of XSS attacks: stored XSS (persistent) and reflected XSS (non-persistent).

Stored XSS (persistent): In this attack, the attacker injects a malicious script directly into the web application or website. The script is stored permanently on the page, so when a potential victim visits the compromised page, the victim's web browser parses all the code of the web page/application without complaint. Afterward, the script is executed in the background without the victim's knowledge. At this point, the script is able to retrieve session cookies, passwords, and any other sensitive information stored in the user's web browser, and it sends the loot back to the attacker in the background.

Reflected XSS (non-persistent): In this attack, the attacker usually sends an email with a malicious link to the victim. When the victim clicks the link, it is opened in the victim's web browser (reflected), and at this point, the malicious script is invoked and begins to retrieve the loot (passwords, credit card numbers, and so on) stored in the victim's web browser.
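The standard server-side defense against both XSS variants is to encode untrusted input before rendering it into a page. The following is a minimal illustrative sketch in Java (my own addition, not code from the book); the escapeHtml helper and the example payload are assumptions for demonstration only:

```java
// Minimal sketch: HTML-escaping untrusted input before rendering it.
// The escapeHtml helper and the sample payload below are illustrative
// assumptions, not code from the book excerpt.
public final class XssEscaper {

    // Replace the characters that allow injected markup to execute.
    public static String escapeHtml(String untrusted) {
        StringBuilder safe = new StringBuilder(untrusted.length());
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '<':  safe.append("&lt;");   break;
                case '>':  safe.append("&gt;");   break;
                case '&':  safe.append("&amp;");  break;
                case '"':  safe.append("&quot;"); break;
                case '\'': safe.append("&#39;");  break;
                default:   safe.append(c);
            }
        }
        return safe.toString();
    }

    public static void main(String[] args) {
        String comment = "<script>stealCookies()</script>";
        // Rendered verbatim, the script would run in the visitor's browser;
        // escaped, it displays as inert text instead.
        System.out.println(escapeHtml(comment));
    }
}
```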
SQL injection (SQLi)

SQLi attacks focus on passing SQL commands into a database application that does not validate user input. The attacker attempts to gain unauthorized access to a database by either creating or retrieving information stored in the database application. Nowadays, attackers are not only interested in gaining access, but also in retrieving (stealing) information and selling it to others for financial gain. SQLi can be used to perform the following:

- Authentication bypass: Allows the attacker to log in to a system without valid user credentials
- Information disclosure: Retrieves confidential information from the database
- Compromising data integrity: The attacker is able to manipulate information stored in the database
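To make the attack concrete, here is a minimal sketch using Java and JDBC (an illustrative assumption on my part, not code from the book). The first query is built by string concatenation and is injectable; the second uses a parameterized PreparedStatement, the standard defense:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginDao {

    // VULNERABLE: user input is concatenated straight into the SQL text.
    // Supplying the classic payload  ' OR '1'='1  as the username turns the
    // WHERE clause into a tautology and bypasses authentication.
    public boolean unsafeLogin(Connection conn, String user, String pass) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE username = '" + user
                + "' AND password = '" + pass + "'";
        try (ResultSet rs = conn.createStatement().executeQuery(sql)) {
            return rs.next();
        }
    }

    // SAFER: a parameterized query; the driver treats the input strictly as
    // data, so the payload above matches no rows instead of rewriting the query.
    public boolean safeLogin(Connection conn, String user, String pass) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE username = ? AND password = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```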
Lightweight Directory Access Protocol (LDAP) injection

LDAP is designed to query and update directory services, such as Microsoft Active Directory. Plain LDAP uses TCP and UDP port 389, while LDAP over SSL (LDAPS) uses port 636. In an LDAP injection attack, the attacker exploits vulnerabilities within a web application that constructs LDAP messages or statements based on user input. If the receiving application does not validate or sanitize the user input, the attacker may be able to manipulate the resulting LDAP messages.

Cross-Site Request Forgery (CSRF)

This attack is somewhat similar to the XSS attack mentioned previously. In a CSRF attack, the victim's machine/browser is forced to execute malicious actions against a website with which the victim has been authenticated (a website that trusts the actions of the user).

To better understand how this attack works, let's visualize a potential victim, Bob. On a regular day, Bob visits some of his favorite websites, such as various blogs and social media platforms, where he usually logs in automatically to view the content. Once Bob logs in to a particular website, the website automatically trusts the transactions between itself and the authenticated user, Bob. One day, he receives an email from the attacker, but unfortunately Bob does not realize that the email is a phishing/spam message and clicks the link within the body of the message. His web browser opens the malicious URL in a new tab. The attack causes Bob's machine/web browser to invoke malicious actions on the trusted website, and the website sees all the requests as originating from Bob. The return traffic, such as the loot (passwords, credit card details, user account data, and so on), is returned to the attacker.

Session hijacking

When a user visits a website, a cookie is stored in the user's web browser. Cookies are used to track the user's preferences and manage the session while the user is on the site. While the user is on the website, a session ID is also set within the cookie, and this information may be persistent, which allows the user to close the web browser, later revisit the same website, and be logged in automatically. However, the web developer can set how long the information persists, whether it expires after an hour or a week, depending on the developer's preference.

In a session hijacking attack, the attacker attempts to obtain the session ID while it is being exchanged between the potential victim and the website. The attacker can then present the victim's session ID to the website, gaining access to the victim's session and, further, to the victim's user account. A defensive sketch for hardening session cookies follows at the end of this subsection.

Cookie poisoning

A cookie stores information about a user's preferences while he/she is visiting a website. Cookie poisoning is when an attacker modifies a victim's cookie, which is then used to gain confidential information about the victim, such as his/her identity.
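Both of the attacks above are blunted by limiting how session cookies can be read and transmitted. As a minimal sketch using the Java Servlet API (my own illustration, assuming javax.servlet 3.0 or later, not code from the book), the HttpOnly flag hides the cookie from client-side scripts and the Secure flag keeps it off plaintext connections where it could be sniffed:

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class SessionCookieFactory {

    // Build a hardened session cookie; the cookie name is illustrative.
    public static void addSessionCookie(HttpServletResponse response, String sessionId) {
        Cookie cookie = new Cookie("SESSIONID", sessionId);
        cookie.setHttpOnly(true);   // invisible to client-side JavaScript (limits XSS theft)
        cookie.setSecure(true);     // sent only over HTTPS, never in plaintext
        cookie.setMaxAge(60 * 60);  // expire after one hour rather than persisting
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```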
DNS Distributed Denial-of-Service (DDoS)

A DDoS attack can be launched against a DNS server. Attackers sometimes target Internet Service Provider (ISP) networks, public and private Domain Name System (DNS) servers, and so on to prevent other legitimate users from accessing the service. If a DNS server is unable to handle the volume of requests coming into the server, its performance gradually degrades until it either stops responding or crashes, resulting in a Denial-of-Service (DoS) condition.

Registrar hijacking

Whenever a person wants to purchase a domain, they have to complete the registration process at a domain registrar. Attackers try to compromise user accounts on various domain registrar websites in the hope of taking control of the victim's domain names. With control of a domain name, multiple DNS records can be created or modified to direct incoming requests to a specific device. If a hacker modifies the A record on a domain to redirect all traffic to a compromised or malicious server, anyone who visits the compromised domain will be redirected to the malicious website.

Cache poisoning

Whenever a user visits a website, the process of resolving a host name to an IP address occurs in the background, and the resolved data is stored within the local system in a cache area. An attacker can compromise this temporary storage area and manipulate any further resolution done by the local system.

Typosquatting

McAfee describes typosquatting, also known as URL hijacking, as a type of cyber attack in which an attacker registers a domain name very close to a company's legitimate domain name in the hope of tricking victims into visiting the fake website, either to steal their personal information or to distribute a malicious payload to their systems.

Let's take a look at a simple example of this type of attack. In this scenario, we have a user, Bob, who frequently uses the Google search engine. Since Bob uses the www.google.com website often, he sets it as the home page on his web browser, so each time he opens the application or clicks the Home icon, www.google.com is loaded onto the screen. One day Bob decides to use another computer, and the first thing he does is set his favorite search engine URL as his home page. However, he types www.gooogle.com without realizing it. Whenever Bob visits this website, it looks like the real one, and because the mistyped domain still resolves to a working website, the trick succeeds; this is how typosquatting works. It's always recommended to use a trusted search engine to find the URL of the website you want to visit; trusted internet search engine companies focus on blacklisting malicious and fake URLs in their search results to help protect internet users such as yourself.

Vulnerabilities at the Transport Layer

In this section, we are going to discuss various weaknesses that exist within the underlying protocols of the Transport layer.

Fingerprinting

In the cybersecurity world, fingerprinting is used to discover the open ports and the services that are running on a target system. From a hacker's point of view, fingerprinting is done before the exploitation phase: the more information a hacker can obtain about a target, the more the hacker can narrow the attack scope and use specific tools, increasing the chances of successfully compromising the target machine. This technique is also used by system/network administrators, network security engineers, and cybersecurity professionals alike. Imagine you're a network administrator assigned to secure a server; apart from applying system hardening techniques such as patching and configuring access controls, you would also need to check for any open ports that are not being used.

Let's take a more practical look at fingerprinting. Suppose we have a target machine, 10.10.10.100, on our network. As a hacker or a network security professional, we would like to know which TCP and UDP ports are open, the services that use those ports, and the service daemons running on the target system. Nmap can discover this information for us by delivering specially crafted probes to the target machine.

Enumeration

In a cyber attack, the hacker uses enumeration techniques to extract information about the target system or network. This information helps the attacker identify points of attack on the system. The following network services and ports stand out for a hacker:

- Port 53: DNS zone transfer and DNS enumeration
- Port 135: Microsoft RPC Endpoint Mapper
- Port 25: Simple Mail Transfer Protocol (SMTP)

DNS enumeration

In DNS enumeration, an attacker attempts to determine whether there are other servers or devices that carry the domain name of an organization. Imagine we are trying to find all the publicly reachable servers Google has on the internet. Using the host utility in Linux and specifying a hostname, host www.google.com, we can see that the IP address 172.217.6.196 has been resolved successfully, meaning there is an active device with the host name www.google.com. Furthermore, if we attempt to resolve the host name gmail.google.com, another IP address is presented, but when we attempt to resolve mx.google.com, no IP address is given. This is an indication that there isn't an active device with the mx.google.com host name. (A short programmatic sketch of this probing appears below, after the zone transfer discussion.)

DNS zone transfer

DNS zone transfer allows the copying of the master (zone) file from one DNS server to another. There are times when administrators do not configure the security settings on their DNS server properly, which allows an attacker to retrieve the master file containing a list of the names and addresses of a corporate network.
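As a minimal sketch of the DNS enumeration probe described above (my own illustration, not code from the book), java.net.InetAddress resolves candidate host names the same way the host utility does; names that throw UnknownHostException have no active record. The candidate subdomains here are illustrative guesses:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsEnum {
    public static void main(String[] args) {
        // Candidate host names to probe; the subdomains are illustrative.
        String[] candidates = {"www.google.com", "gmail.google.com", "mx.google.com"};

        for (String name : candidates) {
            try {
                // Resolves via the system's configured DNS, like the host utility.
                for (InetAddress addr : InetAddress.getAllByName(name)) {
                    System.out.println(name + " -> " + addr.getHostAddress());
                }
            } catch (UnknownHostException e) {
                System.out.println(name + " -> no record (host not found)");
            }
        }
    }
}
```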
Microsoft RPC Endpoint Mapper

Not too long ago, CVE-2015-2370 was recorded in the CVE database. This vulnerability took advantage of the authentication implementation of the Remote Procedure Call (RPC) protocol in various versions of the Microsoft Windows platform, both desktop and server operating systems. A successful exploit would allow an attacker to gain elevated local privileges on a vulnerable system.

SMTP

SMTP is used in mail servers, as are the Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP). SMTP is used for sending mail, while POP and IMAP are used to retrieve mail from an email server. SMTP supports various commands, such as EXPN and VRFY. The EXPN command can be used to verify whether a particular mailbox exists on a local system, while the VRFY command can be used to validate a username on a mail server.

An attacker can establish a connection between the attacker's machine and the mail server on port 25. Once a successful connection has been established, the server sends a banner back to the attacker's machine displaying the server name and the status of the port (open). Once this occurs, the attacker can use the VRFY command followed by a username to check for a valid user on the mail system, using the VRFY bob syntax.

SYN flooding

One of the protocols that exists at the Transport layer is TCP. TCP is used to establish a connection-oriented session between two devices that want to communicate or exchange data. Let's recall how TCP works with two devices, Bob and Alice, that want to exchange messages. Bob sends a TCP Synchronization (SYN) packet to Alice, Alice responds with a TCP Synchronization/Acknowledgment (SYN/ACK) packet, and finally Bob replies with a TCP Acknowledgment (ACK) packet. This exchange is the TCP three-way handshake.

For every TCP SYN packet received on a device, a response must be sent back. One type of attack that takes advantage of this design is known as a SYN flood attack. In a SYN flood attack, the attacker sends a continuous stream of TCP SYN packets to a target system, causing the target machine to process each individual packet and respond accordingly; eventually, under the high influx of TCP SYN packets, the target system becomes overwhelmed and stops responding to any requests.

TCP reassembly and sequencing

During a TCP transmission of datagrams between two devices, each packet is tagged with a sequence number by the sender. This sequence number is used to reassemble the packets back into data. During transmission, each packet may take a different path to the destination, which can cause the packets to arrive out of order rather than in the order in which they were sent over the wire. An attacker can attempt to guess the sequence numbers of packets and inject malicious packets into the network destined for the target. When the target receives the packets, the receiver assumes they came from the real sender, as they contain the appropriate sequence numbers and a spoofed IP address.

Summary

In this article, we explored the different types of vulnerabilities that exist at the Application and Transport layers of the TCP/IP protocol suite. To understand other networking concepts like network architecture, security, network monitoring, and troubleshooting, and to ace the CompTIA certification exam, check out the book CompTIA Network+ Certification Guide.

AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
Top 10 IT certifications for cloud and networking professionals in 2018
What matters on an engineering resume? Hacker Rank report says skills, not certifications

Docker has turned us all into sysadmins

Richard Gall
29 Dec 2015
5 min read
Docker has been one of my favorite software stories of the last couple of years. On the face of it, it should be pretty boring: containerization isn't, after all, as revolutionary as most of the hype around Docker would have you believe. What's actually happened is that Docker has refined the concept and found a really clear way of communicating the idea.

Deploying applications and managing your infrastructure doesn't sound immediately 'sexy'. After all, it was data scientist that was proclaimed the sexiest job of the twenty-first century; sysadmins hardly got an honorable mention. But Docker has, amazingly, changed all that. It's started to make sysadmins sexy... And why should we be surprised? If a sysadmin's role is all about delivering software, managing infrastructure, maintaining it, and making sure it performs for the people using it, it's vital (if not obviously sexy). A decade ago, when software architectures were apparently immutable and much more rigid, the idea of administration wasn't quite so crucial. But now, in a world of mobile and cloud, where technology is about mobility as much as it is about stability (in the past, tech glued us to desktops; now it's encouraging us to work in the park), sysadmins are crucial.

Tools like Docker are central to this. By letting us isolate and package applications in their component pieces, we can start using software in a way that's infinitely more agile and efficient. Where once the focus was on making sure software was simply 'there', waiting for us to use it, it's now something that actively invites invention, reconfiguration, and exploration. Docker's importance to the 'API economy' (which you're going to be hearing a lot more about in 2016) only serves to underline its significance to modern software. Not only does it provide 'a convenient way to package API-provisioning applications', but it also 'makes the composition of API-providing applications more programmatic', as this article on InfoWorld has it. Essentially, it's a tool that unlocks and spreads value.

Can we, then, say the same about the humble sysadmin? Well, yes - it's clear that administering systems is no longer a matter of simple organization or robust management, but a business-critical role that can be the difference between success and failure. However, what this paradigm shift really means is that we've all become sysadmins. Whatever role we're working in, we're deeply conscious of the importance of delivery and collaboration. It's not something we expect other people to do; it's something that we know is crucial. And it's for that reason that I love Docker - it's being used across the tech world, a gravitational pull bringing together disparate job roles in a way that's going to become more and more prominent over the next 12 months. Let's take a look at just two of the areas in which Docker is going to have a huge impact.

Docker in web development

Web development is one field where Docker has already taken hold. It's changing the typical web development workflow, arguably making web developers more productive. If you build in a single container on your PC, that container can then be deployed and managed anywhere. It also gives you options: you can build different services in different containers, or you can build a full-stack application in a single container (although Docker purists might say you shouldn't).
In a nutshell, it's this ability to separate an application into its component parts that underlines why microservices are fundamental to the API economy: different 'bits' - the services - can be used and shared between different organizations. Fundamentally, though, Docker bridges the difficult gap between development and deployment. Instead of having to worry about what happens once it has been deployed, when you build inside a container you can be confident that it's going to work - wherever you deploy it. With Docker, delivering your product is easier (essentially, it helps developers manage the 'ops' bit of DevOps in a simpler way than tackling the methodology in full), which means you can focus on the specific process of development and on optimizing your products.

Docker in data science

Docker's place within data science isn't quite as clearly defined or fully realised as it is in web development, but it's easy to see why it would be so useful to anyone working with data. What I like is that with Docker, you really get back to the 'science' of data science - it's the software version of working in a sterile and controlled environment. This post provides a great insight into just how useful Docker is for data - admittedly it wasn't something I had thought that much about, but once you do, it's clear just how simple it is. As the author puts it: 'You can package up a model in a Docker container, go have that run on some data and return some results - quickly. If you change the model, you can know that other people will be able to replicate the results because of the containerization of the model.'

Wherever Docker rears its head, it's clearly a tool that can be used by everyone. However you identify - web developer, data scientist, or anything else for that matter - it's worth exploring and learning how to apply Docker to your problems and projects. Indeed, the huge range of Docker use cases is possibly one of the main reasons that Docker is such an impressive story - the fact that there are thousands of other stories all circulating around it. Maybe it's time to try it and find out what it can do for you?

NewSQL: What the hype is all about

Amey Varangaonkar
06 Nov 2017
6 min read
First, there was data. Data became the database. Then came SQL. Next came NoSQL. And now comes NewSQL.

NewSQL origins

For decades, the relational database, or SQL, was the reigning data management standard in enterprises all over the world. With the advent of Big Data and cloud-based storage rose the need for a faster, more flexible, and more scalable data management system, one that didn't necessarily have to offer the ACID guarantees of the SQL standard. This was popularly dubbed NoSQL, and databases like MongoDB, Neo4j, and others gained prominence in no time.

We can attribute the emergence and eventual adoption of NoSQL databases to a couple of very important factors. The high costs and lack of flexibility of traditional relational databases drove many SQL users away. Also, NoSQL databases are mostly open source, and their enterprise versions are comparatively cheaper too. They are schema-less, meaning they can be used to manage unstructured data effectively, and they scale well horizontally - that is, you can add more machines to increase computing power and use it to handle high volumes of data. All these features of NoSQL come with an important tradeoff, however: these systems can't simultaneously ensure total consistency.

Of late, there has been a rise in another type of database system that aims to combine the best of both worlds. Popularly dubbed 'NewSQL', it promises to combine the relational data model of SQL with the scalability and speed of NoSQL.

NewSQL - the dark horse in the database race

NewSQL is 'SQL on steroids', say many. This is mainly because all NewSQL systems start with the relational data model and the SQL query language, but also incorporate the features that led to the rise of NoSQL, addressing the issues of scalability, flexibility, and high performance. They offer the assurance of ACID transactions, as in relational models. However, what makes them really unique is that they allow the horizontal scaling of NoSQL and can process large volumes of data with high performance and reliability. This is why businesses really like the concept of NewSQL: the performance of NoSQL and the reliability and consistency of the SQL model, all packed in one.

To understand what the hype surrounding NewSQL is all about, it's worth comparing NewSQL database systems with traditional SQL and NoSQL database systems, and seeing where they stand out:

Characteristic                  Relational (SQL)   NoSQL      NewSQL
ACID compliance                 Yes                No         Yes
OLTP/OLAP support               Yes                No         Yes
Rigid schema structure          Yes                No         In some cases
Support for unstructured data   No                 Yes        In some cases
Performance with large data     Moderate           Fast       Very fast
Performance overhead            Huge               Moderate   Minimal
Support from community          Very high          High       Low

As we can see from the table above, NewSQL really comes through as the best option when you're dealing with larger datasets and want to lower performance overheads. To give you a practical example, consider an organization that has to work with a large number of short transactions, accesses a limited amount of data, but executes those queries repeatedly. For such an organization, a NewSQL database system would be a perfect fit. These features are leading to the gradual growth of NewSQL systems; however, it will take some time for more industries to adopt them.

Not all NewSQL databases are created equal

Today, one has a host of NewSQL solutions to choose from. Some popular solutions are Clustrix, MemSQL, VoltDB, and CockroachDB.
Cloud Spanner, the latest NewSQL offering by Google, became generally available in February 2017, indicating Google's interest in the NewSQL domain and the value a NewSQL database can offer to its existing cloud offerings.

It is important to understand that there are significant differences among these various NewSQL solutions, so you should choose one carefully, after evaluating your organization's data requirements and problems. As this article on Dataconomy points out, while some databases handle transactional workloads well, they do not offer the benefit of native clustering - SAP HANA is one such example. NuoDB focuses on cloud deployments, but its overall throughput is found to be rather sub-par. MemSQL is a suitable choice when it comes to clustered analytics but falls short when it comes to consistency. Thus, the choice of database depends purely on the task you want to do and on what trade-offs you are ready to accept without letting them affect your workflow too much.

DBAs and programmers in the NewSQL world

Regardless of which database system an enterprise adopts, the role of DBAs will continue to be important going forward. Core database administration and maintenance tasks such as backup, recovery, and replication will still need to be taken care of. The major challenge for NewSQL DBAs will be in choosing and then customizing the right database solution to fit the organizational requirements. Some degree of capacity planning and overall database administration skill might also have to be recalibrated. Likewise, NewSQL database programmers may find themselves dealing with data manipulation and querying tasks similar to those faced while working with traditional database systems - but NewSQL programmers will be doing these tasks at a much larger, or shall we say, more 'distributed' scale.

In conclusion

When it comes to solving a particular problem related to data management, it's often said that 80% of the solution comes down to selecting the right tool, and 20% is about understanding the problem at hand! In order to choose the right database system for your organization, you must ask yourself these two questions:

- What is the nature of the data you will work with?
- What are you willing to trade off? In other words, how important are factors such as the scalability and performance of the database system?

For example, if you primarily work with transactional data and place a priority on high performance and high scalability, then NewSQL databases might fit the bill perfectly. If you're going to work with volatile data, NewSQL might help you there as well; however, there are better NoSQL solutions to tackle that kind of data problem.

As we have seen, NewSQL databases have been designed to combine the advantages and power of both relational and NoSQL systems. It is important to know that NewSQL databases are not designed to replace either NoSQL or the SQL relational model. They are, rather, intentionally built alternatives for data processing that mask the flaws and shortcomings of both relational and non-relational database systems. The ultimate goal of NewSQL is to deliver a high-performance, highly available solution to handle modern data, without compromising on data consistency and high-speed transaction capabilities.

When, why and how to use Graph analytics for your big data

Sunith Shetty
20 Dec 2017
10 min read
[box type="note" align="" class="" width=""]This article is an excerpt taken from a book Big Data Analytics with Java written by Rajat Mehta. In this book, you will learn how to perform real-time streaming analytics on big data using machine learning algorithms and power of Java. [/box] From the article given below, you will learn why graph analytics is a favourable choice in order to analyze complex datasets. Graph analytics Vs Relational Databases The biggest advantage to using graphs is you can analyze these graphs and use them for analyzing complex datasets. You might ask what is so special about graph analytics that we can’t do by relational databases. Let’s try to understand this using an example, suppose we want to analyze your friends network on Facebook and pull information about your friends such as their name, their birth date, their recent likes, and so on. If Facebook had a relational database, then this would mean firing a query on some table using the foreign key of the user requesting this info. From the perspective of relational database, this first level query is easy. But what if we now ask you to go to the friends at level four in your network and fetch their data (as shown in the following diagram). The query to get this becomes more and more complicated from a relational database perspective but this is a trivial task on a graph or graphical database (such as Neo4j). Graphs are extremely good on operations where you want to pull information from one end of the node to another, where the other node lies after a lot of joins and hops. As such, graph analytics is good for certain use cases (but not for all use cases, relational database are still good on many other use cases): As you can see, the preceding diagram depicts a huge social network (though the preceding diagram might just be depicting a network of a few friends only). The dots represent actual people in a social network. So if somebody asks to pick one user on the left-most side of the diagram and see and follow host connections to the right-most side and pull the friends at the say 10th level or more, this is something very difficult to do in a normal relational database and doing it and maintaining it could easily go out of hand. There are four particular use cases where graph analytics is extremely useful and used frequently (though there are plenty more use cases too): Path analytics: As the name suggests, this analytics approach is used to figure out the paths as you traverse along the nodes of a graph. There are many fields where this can be used—simplest being road networks and figuring out details such as shortest path between cities, or in flight analytics to figure out the shortest time taking flight or direct flights. Connectivity analytics: As the name suggests, this approach outlines how the nodes within a graph are connected to each other. So using this you can figure out how many edges are flowing into a node and how many are flowing out of the node. This kind of information is very useful in analysis. For example, in a social network if there is a person who receives just one message but gives out say ten messages within his network then this person can be used to market his favorite products as he is very good in responding to messages. Community Analytics: Some graphs on big data are huge. But within these huge graphs there might be nodes that are very close to each other and are almost stacked in a cluster of their own. 
This is useful information, as based on it you can extract communities from your data. For example, in a social network, people who are part of some community - say, marathon runners - can be clubbed into a single community and tracked further.

Centrality analytics: This kind of analytical approach is useful in finding the central nodes in a network or graph. It is helpful in figuring out sources that are single-handedly connected to many other sources, such as influential people in a social network or a central computer in a computer network.

From the perspective of this article, we will be covering some of these use cases in our sample case studies, and for this we will be using a library on Apache Spark called GraphFrames.

GraphFrames

The GraphX library is advanced and performs well on massive graphs, but, unfortunately, it is currently only implemented in Scala and does not have a direct Java API. GraphFrames is a relatively new library that is built on top of Apache Spark and provides support for DataFrame (now Dataset) based graphs. It contains a lot of methods that are direct wrappers over the underlying GraphX methods. As such, it provides similar functionality to GraphX, except that GraphX acts on Spark RDDs while GraphFrames works on DataFrames, so GraphFrames is more user friendly (as DataFrames are simpler to use). All the advantages of firing Spark SQL queries, joining datasets, and filtering queries are supported as well.

To understand GraphFrames and representing massive big data graphs, we will take small baby steps first by building some simple programs using GraphFrames before building full-fledged case studies. First, let's see how to build a graph using Spark and GraphFrames on a sample dataset.

Building a graph using GraphFrames

Consider a simple graph that depicts four people - Kai, John, Tina, and Alex - and the relations they share, whether they follow each other or are friends. We will now try to represent this basic graph using the GraphFrame library on top of Apache Spark, and in the meantime we will also start learning the GraphFrame API. Since GraphFrames is a module on top of Spark, let's first build the Spark configuration and Spark SQL context (elided here for brevity):

    SparkConf conf = ...
    JavaSparkContext sc = ...
    SQLContext sqlContext = ...

We will now build the JavaRDD object that will contain the data for our vertices, that is, the people Kai, John, Alex, and Tina in this small network. We will create some sample data using the RowFactory class of the Spark API and provide the attributes (the ID of the person, and their name and age) that we need per row of the data:

    JavaRDD<Row> verRow = sc.parallelize(Arrays.asList(
            RowFactory.create(101L, "Kai", 27),
            RowFactory.create(201L, "John", 45),
            RowFactory.create(301L, "Alex", 32),
            RowFactory.create(401L, "Tina", 23)));

Next, we will define the structure or schema of the attributes used to build the data.
The ID of the person is of type long, the name of the person is a string, and the age of the person is an integer, as shown next in the code:

    List<StructField> verFields = new ArrayList<StructField>();
    verFields.add(DataTypes.createStructField("id", DataTypes.LongType, true));
    verFields.add(DataTypes.createStructField("name", DataTypes.StringType, true));
    verFields.add(DataTypes.createStructField("age", DataTypes.IntegerType, true));

Now, let's build the sample data for the relations between these people; this can be represented as the edges of the graph later. Each relationship data item will have the IDs of the persons that are connected together and the type of relationship they share (that is, friends or followers). Again we will use the Spark-provided RowFactory, build some sample data per row, and create the JavaRDD with this data:

    JavaRDD<Row> edgRow = sc.parallelize(Arrays.asList(
            RowFactory.create(101L, 301L, "Friends"),
            RowFactory.create(101L, 401L, "Friends"),
            RowFactory.create(401L, 201L, "Follow"),
            RowFactory.create(301L, 201L, "Follow"),
            RowFactory.create(201L, 101L, "Follow")));

Again, we define the schema of the attributes added as part of the edges earlier. This schema is later used in building the dataset for the edges. The attributes passed are the source ID of the node, the destination ID of the other node, and the relationType, which is a string:

    List<StructField> edgFields = new ArrayList<StructField>();
    edgFields.add(DataTypes.createStructField("src", DataTypes.LongType, true));
    edgFields.add(DataTypes.createStructField("dst", DataTypes.LongType, true));
    edgFields.add(DataTypes.createStructField("relationType", DataTypes.StringType, true));

Using the schemas that we have defined for the vertices and edges, let's now build the actual datasets. For this, first create the StructType objects that hold the schema details for the vertices and the edges data; using these structures and the actual data, we next build the dataset of the vertices (verDF) and the dataset of the edges (edgDF):

    StructType verSchema = DataTypes.createStructType(verFields);
    StructType edgSchema = DataTypes.createStructType(edgFields);

    Dataset<Row> verDF = sqlContext.createDataFrame(verRow, verSchema);
    Dataset<Row> edgDF = sqlContext.createDataFrame(edgRow, edgSchema);

Finally, we use the vertices and edges datasets and pass them as parameters to the GraphFrame constructor to build the GraphFrame instance:

    GraphFrame g = new GraphFrame(verDF, edgDF);

The time has now come to do some mild analytics on the graph we just created. Let's first visualize the data for the graph, starting with the data on the vertices. For this, we invoke the vertices method on the GraphFrame instance and invoke the standard show method on the generated vertices dataset (GraphFrame generates a new dataset when the vertices method is invoked):
    g.vertices().show();

This prints the vertices dataset (shown as a table in the original post). Let's also see the data on the edges:

    g.edges().show();

Next, let's look at the number of vertices and the number of edges:

    System.out.println("Number of Vertices : " + g.vertices().count());
    System.out.println("Number of Edges : " + g.edges().count());

This prints the following result:

    Number of Vertices : 4
    Number of Edges : 5

GraphFrames also has handy methods to find the degrees of vertices (inDegrees, outDegrees, or degrees). The following prints the in-degrees of all the vertices:

    g.inDegrees().show();

Finally, let's see one more small thing on this simple graph. As GraphFrames works on datasets, all the handy dataset methods, such as filtering, mapping, and so on, can be applied to them. We will use the filter method on the vertices dataset to figure out the people in the graph with an age greater than thirty:

    g.vertices().filter("age > 30").show();

From this post, we learned about graph analytics. We saw how graphs can be built from massive big datasets in order to derive quick insights, and when to choose graph analytics over a relational database as the challenges in your organization grow. To know more about preparing and refining big data and performing smart data analytics using machine learning algorithms, you can refer to the book Big Data Analytics with Java.
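Going one step beyond the excerpt: GraphFrames also supports motif finding, a small pattern language for structural queries. The sketch below is my own illustration (assuming the find method documented in the GraphFrames API) and builds on the GraphFrame g created above:

```java
// Motif query: find pairs of people connected in both directions,
// i.e. (a) has an edge to (b) and (b) has an edge back to (a).
Dataset<Row> mutual = g.find("(a)-[e1]->(b); (b)-[e2]->(a)");
mutual.show();

// Motif results are ordinary datasets, so the usual filters apply,
// for example keeping only the "Follow" relationships.
mutual.filter("e1.relationType = 'Follow'").show();
```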

Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist

Packt Editorial Staff
15 Nov 2019
8 min read
There has been a continuous transformation over the last few years to bring .NET to platforms other than Windows. .NET Core 3.0 released in September 2019 with a primary focus on adding Windows-specific features. .NET Core 3.0 supports side-by-side and app-local deployments, a fast JSON reader, serial port access and other pin access for Internet of Things (IoT) solutions, and tiered compilation on by default.

In this article we will explore the .NET Core components of the new 3.0 release. This article is an excerpt from the book C# 8.0 and .NET Core 3.0 - Modern Cross-Platform Development - Fourth Edition, written by Mark J. Price. Mark follows a step-by-step approach in the book, filled with exciting projects and fascinating theory, in this highly acclaimed franchise.

Pieces of .NET Core components

These are the pieces that play an important role in the development of .NET Core:

Language compilers: These turn your source code written in languages such as C#, F#, and Visual Basic into intermediate language (IL) code stored in assemblies. With C# 6.0 and later, Microsoft switched to an open source rewritten compiler known as Roslyn that is also used by Visual Basic.

Common Language Runtime (CoreCLR): This runtime loads assemblies, compiles the IL code stored in them into native code instructions for your computer's CPU, and executes the code within an environment that manages resources such as threads and memory.

Base Class Libraries (BCL) of assemblies in NuGet packages (CoreFX): These are prebuilt assemblies of types packaged and distributed using NuGet for performing common tasks when building applications. You can use them to quickly build anything you want, rather like combining LEGO™ pieces.

.NET Core 2.0 implemented .NET Standard 2.0, which is a superset of all previous versions of .NET Standard, and lifted .NET Core up to parity with .NET Framework and Xamarin. .NET Core 3.0 implements .NET Standard 2.1, which adds new capabilities and enables performance improvements beyond those available in .NET Framework.

Understanding assemblies, packages, and namespaces

An assembly is where a type is stored in the filesystem. Assemblies are a mechanism for deploying code. For example, the System.Data.dll assembly contains types for managing data. To use types in other assemblies, they must be referenced. Assemblies are often distributed as NuGet packages, which can contain multiple assemblies and other resources. You will also hear about metapackages and platforms, which are combinations of NuGet packages.

A namespace is the address of a type. Namespaces are a mechanism to uniquely identify a type by requiring a full address rather than just a short name. In the real world, Bob of 34 Sycamore Street is different from Bob of 12 Willow Drive. In .NET, the IActionFilter interface of the System.Web.Mvc namespace is different from the IActionFilter interface of the System.Web.Http.Filters namespace.

Understanding dependent assemblies

If an assembly is compiled as a class library and provides types for other assemblies to use, then it has the file extension .dll (dynamic link library), and it cannot be executed standalone. Likewise, if an assembly is compiled as an application, then it has the file extension .exe (executable) and can be executed standalone. Before .NET Core 3.0, console apps were compiled to .dll files and had to be executed by the dotnet run command or a host executable.
Any assembly can reference one or more class library assemblies as dependencies, but you cannot have circular references. So, assembly B cannot reference assembly A if assembly A already references assembly B. The compiler will warn you if you attempt to add a dependency reference that would cause a circular reference.

Understanding the Microsoft .NET Core App platform

By default, console applications have a dependency reference on the Microsoft .NET Core App platform. This platform contains thousands of types in NuGet packages that almost all applications would need, such as the int and string types. When using .NET Core, you reference the dependency assemblies, NuGet packages, and platforms that your application needs in a project file.

Let's explore the relationship between assemblies and namespaces. In Visual Studio Code, create a folder named test01 with a subfolder named AssembliesAndNamespaces, and enter dotnet new console to create a console application. Save the current workspace as test01 in the test01 folder and add the AssembliesAndNamespaces folder to the workspace. Open AssembliesAndNamespaces.csproj, and note that it is a typical project file for a .NET Core application (check out this code on GitHub).

Although it is possible to include the assemblies that your application uses with its deployment package, by default the project will probe for shared assemblies installed in well-known paths. First, it will look for the specified version of .NET Core in the current user's .dotnet/store and .nuget folders, and then it looks in a fallback folder that depends on your OS, as shown in the following root paths:

Windows: C:\Program Files\dotnet\sdk
macOS: /usr/local/share/dotnet/sdk

Most common .NET Core types are in the System.Runtime.dll assembly. You can see the relationship between some assemblies and the namespaces that they supply types for, and note that there is not always a one-to-one mapping between assemblies and namespaces, as shown in the following table:

Assembly                  Example namespaces                                       Example types
System.Runtime.dll        System, System.Collections, System.Collections.Generic  Int32, String, IEnumerable<T>
System.Console.dll        System                                                   Console
System.Threading.dll      System.Threading                                         Interlocked, Monitor, Mutex
System.Xml.XDocument.dll  System.Xml.Linq                                          XDocument, XElement, XNode

Understanding NuGet packages

.NET Core is split into a set of packages, distributed using a Microsoft-supported package management technology named NuGet. Each of these packages represents a single assembly of the same name. For example, the System.Collections package contains the System.Collections.dll assembly. The following are the benefits of packages:

- Packages can ship on their own schedule.
- Packages can be tested independently of other packages.
- Packages can support different OSes and CPUs by including multiple versions of the same assembly built for different OSes and CPUs.
- Packages can have dependencies specific to only one library.
- Apps are smaller because unreferenced packages aren't part of the distribution.

The following table lists some of the more important packages and their important types:

Package               Important types
System.Runtime        Object, String, Int32, Array
System.Collections    List<T>, Dictionary<TKey, TValue>
System.Net.Http       HttpClient, HttpResponseMessage
System.IO.FileSystem  File, Directory
System.Reflection     Assembly, TypeInfo, MethodInfo

Understanding frameworks

There is a two-way relationship between frameworks and packages.
Packages define the APIs, while frameworks group packages. A framework without any packages would not define any APIs. Each .NET package supports a set of frameworks. For example, the System.IO.FileSystem package version 4.3.0 supports the following frameworks:

- .NET Standard, version 1.3 or later
- .NET Framework, version 4.6 or later
- Six Mono and Xamarin platforms (for example, Xamarin.iOS 1.0)

Understanding dotnet commands

When you install the .NET Core SDK, it includes the command-line interface (CLI) named dotnet.

Creating new projects

The dotnet command-line interface has commands that work on the current folder to create a new project using templates. In Visual Studio Code, navigate to Terminal and enter the dotnet new -l command to list your currently installed templates.

Managing projects

The dotnet CLI has the following commands that work on the project in the current folder to manage it:

- dotnet restore: This downloads dependencies for the project.
- dotnet build: This compiles the project.
- dotnet test: This runs unit tests on the project.
- dotnet run: This runs the project.
- dotnet pack: This creates a NuGet package for the project.
- dotnet publish: This compiles and then publishes the project, either with dependencies or as a self-contained application.
- add: This adds a reference to a package or class library to the project.
- remove: This removes a reference to a package or class library from the project.
- list: This lists the package or class library references for the project.

To summarize, we explored the .NET Core components of the new 3.0 release. If you want to learn the fundamentals, build practical applications, and explore the latest features of C# 8.0 and .NET Core 3.0, check out our latest book, C# 8.0 and .NET Core 3.0 - Modern Cross-Platform Development - Fourth Edition, written by Mark J. Price.

.NET Framework API Porting Project concludes with .NET Core 3.0
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
.NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies, optimizing applications ASP.NET Core and Blazor
Inspecting APIs in ASP.NET Core [Tutorial]
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]

Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms

Richard Gall
19 Sep 2018
9 min read
The ethics of artificial intelligence seems to have found its way into just about every corner of public life. From law enforcement to justice, through to recruitment, artificial intelligence is impacting both the work we do and the way we think. But if you really want to get into the ethics of artificial intelligence, you need to go further than the public realm and move into the bedroom.

Sex robots have quietly been a topic of conversation for a number of years, but with the rise of artificial intelligence they appear to have found their way into the mainstream - or at least the edges of the mainstream. There's potentially some squeamishness when thinking about sex robots, but, in fact, if we want to think seriously about the consequences of artificial intelligence - from how it is built to how it impacts the way we interact with each other and other things - sex robots are a great place to begin.

Read next: Introducing Deon, a tool for data scientists to add an ethics checklist

Sexualizing artificial intelligence

It's easy to get caught up in the image of a sex doll - plastic skinned, impossible breasts and empty eyes, sad and uncanny - but sexualized artificial intelligence can come in many other forms too. Let's start with sex chatbots. These are, fundamentally, a robotic intelligence that is able to respond to and stimulate a human's desires. What's significant is that they treat the data of sex and sexuality as primarily linguistic: the language people use to describe themselves, their wants, their needs, their feelings. The movie Her is a great example of a sexualized chatbot. Of course, the digital assistant doesn't begin sexualized, but Joaquin Phoenix ends up falling in love with his female-voiced digital assistant through conversation and intimate interaction. The physical aspect of sex is something that only comes later.

https://youtu.be/3n5muEWaE_Q

Ai Furuse - the Japanese sex chatbot

But they exist in real life too. The best example of these is Ai Furuse, a virtual girlfriend that interacts with you in an almost human-like manner. Ai Furuse is programmed with a dictionary of more than 30,000 words and is able to respond to conversational cues. But more importantly, Ai Furuse is able to learn from conversations. She can gather information about her interlocutor and, apparently, even identify changes in their mood. The more you converse with the chatbot, the more intimate and close your relationship should become (in theory).

Immediately, we can begin to see some big engineering questions. These are primarily about design, but remember - wherever you begin to think about design, you start to move towards the domain of ethics as well. The very process of learning through interaction requires the AI to be programmed in certain ways. It's a big challenge for engineers to determine what's really important in these interactions; they need to make judgements about how users behave. The information that's passed to the chatbot needs to be codified and presented in a way that can be understood and processed, and that requires some work in itself. The models of desire on which Ai Furuse is built are necessarily limited; they bear the marks of the engineers that helped to create 'her'. It becomes a question of ethics once we start to ask whether these models might be normative in some way. Do they limit or encourage certain ways of interacting?

Desire algorithms

In the context of one chatbot, that might not seem like a big deal.
But if (or as) the trend moves into the mainstream, we start to enter a world where the very fact of engineering chatbots inadvertently engineers the desires and sexualities that are expressed towards them. In this instance, not only do we shape the algorithms (which is what's meant to happen), we also allow these 'desire algorithms' to shape our desires and wants too.

Storing sexuality on the cloud

But there's another, more practical issue as well. If the data that sex chatbots or virtual lovers run on lives in the cloud, we're in a situation where the most private aspects of our lives are stored somewhere that could easily be accessed by malicious actors. This is a real risk with Ai Furuse, where cloud space is required for your 'virtual girlfriend' to 'evolve' further - you pay for additional cloud space. It's not hard to see how this could become a problem in the future: thousands of sexual and romantic conversations could easily be harvested for nefarious purposes.

Sex robots, artificial intelligence, and the problem of consent

Language, then, is the kernel of sexualized artificial intelligence. Algorithms, when made well, should respond to, process, adapt to, and then stimulate further desire. But that's only half the picture. The physical reality of sex robots - both as literal objects, but also the physical effects of what they do - only adds a further complication into the mix. Questions about what desire is - why we have it, what we should do with it - are at the forefront of this debate. If, for example, a paedophile can use a child-like sex robot as a surrogate object of his desires, is that, in fact, an ethical use of artificial intelligence? Here the debate isn't just about the algorithm, but about how it should be deployed. Is the algorithm performing a therapeutic purpose, or is it actually encouraging a form of sexuality that fails to understand the concepts of harm and consent?

This is an important question in the context of sex robots, but it's also an important question for the broader ethics of AI. If we can build an AI that is able to do something (for example, automate billions of jobs), should we do it? Whose responsibility is it to deal with the consequences?

The campaign against sex robots

These are some of the considerations that inform the perspective of the Campaign Against Sex Robots. On their website, they write:

"Over the last decades, an increasing effort from both academia and industry has gone into the development of sex robots - that is, machines in the form of women or children for use as sex objects, substitutes for human partners or prostituted persons. The Campaign Against Sex Robots highlights that these kinds of robots are potentially harmful and will contribute to inequalities in society. We believe that an organized approach against the development of sex robots is necessary in response to the numerous articles and campaigns that now promote their development without critically examining their potentially detrimental effect on society."

For the campaign, sex robots pose a risk in that they perpetuate already existing inequalities and forms of exploitation in society, and they prevent us from facing up to those inequalities. The campaign argues that sex robots will "reduce human empathy that can only be developed by an experience of mutual relationship."

Consent and context

Consent is the crucial problem when it comes to artificial intelligence. And you could say that it points to one of the limitations of artificial intelligence that we often miss: context.
Algorithms can't ever properly understand context. There will, undoubtedly, be people who disagree with this. Algorithms can, for example, understand the context of certain words and sentences, right? Well, yes, that may be true, but that's not strictly understanding context. Artificial intelligence algorithms are set a context, one from which they cannot deviate. They can't, for example, decide that encouraging a paedophile to act out their fantasies is wrong; they are programmed to do just that. But the problem isn't simply with robot consent. There's also an issue with how we consent to an algorithm in this scenario. As journalist Adam Rogers writes in this article for Wired, published at the start of 2018:

"It's hard to consent if you don't know to whom or what you're consenting. The corporation? The other people on the network? The programmer?"

Rogers doesn't go into detail on this insight, but it gets to the crux of the matter when discussing artificial intelligence and sex robots. If sex is typically built on a relationship between people, with established forms of communication that establish both consent and desire, what happens when this becomes literally codified? What happens when these additional layers of engineering and commerce get added on top of basic sexual interaction?

Is the problem that we want artificial intelligence to be human?

Towards the end of the same piece, Rogers finds a possible solution from privacy researcher Sarah Jamie Lewis. Lewis wonders whether one of the main problems with sex robots is this need to think in humanoid terms. "We're already in the realm of devices that look like alien tech. I looked at all the vibrators I own. They're bright colors. None of them look like a penis that you'd associate with a human. They're curves and soft shapes." Of course, this isn't an immediate solution - sex robots are meant to simulate sex in its traditional (arguably heteronormative) sense. What Lewis suggests, and Rogers seems to agree with, is really just AI-assisted masturbation. But their insight is still useful. On reflection, there is a very real and urgent question about the way in which we deploy artificial intelligence. We need to think carefully about what we want it to replicate and what we want it to encourage.

Sex robots are the starting point for thinking seriously about artificial intelligence

It's worth noting that when discussing algorithms we end up looping back onto ourselves. Sex robots, algorithms, artificial intelligence - they're a problem insofar as they pose questions about what we really value as humans. They make us ask what we want to do with our time, and how we want to interact with other people. This is perhaps a way forward for anyone that builds or interacts with algorithms, whether they help you get off or find your next purchase. Consider what your algorithm is doing - what it's encouraging, storing, processing, substituting. We can't prepare for a future with artificial intelligence without seriously considering these things.

How has ethical hacking benefited the software industry

Fatema Patrawala
27 Sep 2019
8 min read
In an online world infested with hackers, we need more ethical hackers. But all around the world, hackers have long been portrayed by the media and pop culture as the bad guys. Society is taught to see them as cyber-criminals and outliers who seek to destroy systems, steal data, and take down anything that gets in their way. There is no shortage of news, stories, movies, and television shows that outright villainize the hacker. From the 1995 movie Hackers to the more recent Blackhat, hackers are often portrayed as outsiders who use their computer skills to inflict harm and commit crime.

Read this: Did you know hackers could hijack aeroplane systems by spoofing radio signals?

While there have been real-world, damaging events created by cyber-criminals that serve as the inspiration for this negative messaging, it is important to understand that this is only one side of the story. The truth is that while there are plenty of criminals with top-notch hacking and coding skills, there is also a growing and largely overlooked community of ethical (commonly known as white-hat) hackers who work endlessly to help make the online world a better and safer place. To put it lightly, these folks use their cyber superpowers for good, not evil. For example, Linus Torvalds, the creator of Linux, was a hacker, as was Tim Berners-Lee, the man behind the World Wide Web. The list is long for the same reason the list of hackers turned coders is long – they all saw better ways of doing things.

What is ethical hacking?

According to the EC-Council, an ethical hacker is "an individual who is usually employed with an organization and who can be trusted to undertake an attempt to penetrate networks and/or computer systems using the same methods and techniques as a malicious hacker."

Listen: We discuss what it means to be a hacker with Adrian Pruteanu [Podcast]

The role of an ethical hacker is important since the bad guys will always be there, trying to find cracks, backdoors, and other secret ways to access data they shouldn't. Ethical hackers not only help expose flaws in systems, but they assist in repairing them before criminals even have a shot at exploiting said vulnerabilities. They are an essential part of the cybersecurity ecosystem and can often unearth serious unknown vulnerabilities in systems better than any security solution ever could. Certified ethical hackers make an average annual income of $99,000, according to Indeed.com. The average starting salary for a certified ethical hacker is $95,000, according to EC-Council senior director Steven Graham.

Ways ethical hacking benefits the software industry

Nowadays, ethical hacking has become increasingly mainstream, and multinational tech giants like Google, Facebook, Microsoft, Mozilla, and IBM employ hackers or teams of hackers in order to keep their systems secure. As a result of the success hackers have shown at discovering critical vulnerabilities, in the last year alone there has been a 26% increase in organizations running bug bounty programs, where they bolster their security defenses with hackers. Beyond this, ethical hacking has brought a number of other benefits to organizations, particularly in the software industry.

Carry out adequate preventive measures to avoid security breaches

An ethical hacker takes preventive measures to avoid security breaches. For example, they use port scanning tools like Nmap or Nessus to scan an organization's systems and find open ports, as the short sketch below illustrates.
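To make the port scanning step concrete, here is a minimal sketch of a TCP connect scan in Python. It is an illustration of the idea rather than a substitute for a real tool like Nmap; the host and port range are placeholder values, and you should only ever scan systems you are authorized to test.

import socket

def scan_ports(host, ports, timeout=0.5):
    """Attempt a TCP connection to each port; collect the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds, i.e. the port is open
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Placeholder target: scan the well-known ports on your own machine.
    print(scan_ports("127.0.0.1", range(1, 1025)))

A real assessment adds concurrency, service fingerprinting, and logging on top of this, which is exactly what mature tools such as Nmap provide out of the box.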
The vulnerabilities associated with each open port are studied, and remedial measures are taken. An ethical hacker will examine patch installations and make sure that they cannot be exploited. They also engage in social engineering concepts like dumpster diving - rummaging through trash bins for passwords, charts, sticky notes, or anything with crucial information that can be used to generate an attack. They also attempt to evade IDS (Intrusion Detection Systems), IPS (Intrusion Prevention Systems), honeypots, and firewalls. They carry out actions like bypassing and cracking wireless encryption, and hijacking web servers and web applications.

Perform penetration tests on networks at regular intervals

One of the best ways to prevent illegal hacking is to test the network for weak links on a regular basis. Ethical hackers help clean and update systems by discovering new vulnerabilities on an ongoing basis. Going a step further, ethical hackers also explore the scope of damage that can occur due to an identified vulnerability. This particular process is known as pen testing, which is used to identify network vulnerabilities that an attacker can target. There are many methods of pen testing, and the organization may use different methods depending on its requirements. Any of the below pen testing methods can be carried out by an ethical hacker:

- Targeted testing involves the organization's people and the hacker; the organization's staff will be aware of the hacking being performed.
- External testing penetrates all externally exposed systems such as web servers and DNS.
- Internal testing uncovers vulnerabilities open to internal users with access privileges.
- Blind testing simulates real attacks from hackers; testers are given limited information about the target, which requires them to perform reconnaissance prior to the attack.

Pen testing is the strongest case for hiring ethical hackers.

Ethical hackers have built computers and programs for the software industry

Going back to the early days of the personal computer, many members of the Silicon Valley community would have been considered hackers in modern terms, in that they pulled things apart and put them back together in new and interesting ways. This desire to explore systems and networks to find out how they worked made many of the proto-hackers more knowledgeable about different technologies and how they can be safeguarded from malicious attacks. Just as many of the early computer enthusiasts turned out to be great at designing new computers and programs, many people who identify themselves as hackers are also amazing programmers. This trend of the hacker as the innovator has continued with the open-source software movement. Much of the open-source code is produced, tested, and improved by hackers – usually during collaborative computer programming events, which are affectionately referred to as "hackathons." Even if you never touch a piece of open-source software, you still benefit from the elegant solutions that hackers come up with, which inspire or are outright copied by proprietary software companies.

Ethical hackers help safeguard customer information by preventing data breaches

The personal information of consumers is the new oil of the digital world. Everything runs on data. But while businesses that collect and process consumer data have become increasingly valuable and powerful, recent events prove that even the world's biggest brands are vulnerable when they violate their customers' trust.
Hence, it is of utmost importance for software businesses to gain the trust of customers by ensuring the security of their data. With high-profile data breaches seemingly in the news every day, "protecting businesses from hackers" has traditionally dominated the data privacy conversation.

Read this: StockX confirms a data breach impacting 6.8 million customers

In such a scenario, ethical hackers will prepare you for the worst: they will work in conjunction with the IT response plan to ensure data security and to patch breaches when they do happen. Otherwise, you risk a disjointed, inconsistent, and delayed response to issues or crises. It is also imperative to align how your organization will communicate with stakeholders. This will reduce the need for real-time decision-making in an actual crisis, as well as help limit inappropriate responses. Ethical hackers may also help in running a cybersecurity crisis simulation to identify flaws and gaps in your process, and better prepare your teams for such a pressure-cooker situation when it hits.

Information security plan to create security awareness at all levels

No matter how large or small your company is, you need to have a plan to ensure the security of your information assets. Such a plan is called a security program, and it is framed by information security professionals. The IT security team primarily devises the security program, but if it is drawn up in coordination with ethical hackers, they can provide the framework for keeping the company at the desired security level. Additionally, by assessing the risks the company faces, they can decide how to mitigate them and plan for how to keep the program and security practices up to date.

To summarize…

Many white hat, gray hat, and reformed black hat hackers have made significant contributions to the advancement of technology and the internet. In truth, hackers are almost in the same situation as motorcycle enthusiasts, in that the existence of a few motorcycle gangs with real criminal operations tarnishes the image of the entire subculture. You don't need to go out and hug the next hacker you meet, but it might be worth remembering that the word hacker doesn't equal criminal, at least not all the time. Our online ecosystem is made safer, better, and more robust by ethical hackers. As Keren Elazari, an ethical hacker herself, put it: "We need hackers, and in fact, they just might be the immune system for the information age. Sometimes they make us sick, but they also find those hidden threats in our world, and they make us fix it."

3 cybersecurity lessons for e-commerce website administrators
Hackers steal bitcoins worth $41M from Binance exchange in a single go!
A security issue in the net/http library of the Go language affects all versions and all components of Kubernetes


How to build a location-based augmented reality app

Guest Contributor
22 Nov 2018
7 min read
The augmented reality market is developing rapidly. Today, it has a total market value of almost $15 billion according to Statista, and this figure could rise to $210 billion by 2022. Augmented reality is having a huge impact on the games industry, but it's being used by organizations in fields as diverse as publishing and retail. For example, Layar is an app that turns static objects into live objects, while IKEA's Catalog app lets you imagine how different types of furniture might fit into your room. But it's not just about commerce: some apps have a distinctly educational bent, like Field Trip. Field Trip uses augmented reality to help users learn about the history that immediately surrounds them. The best augmented reality apps are always deceptively simple. But to build a really effective augmented reality application you need a diverse range of skills that span both the domain of software and that of real-world physics. Let's take a closer look at location-based augmented reality apps, including what they're used for and how you can begin building them.

How does a location-based AR app work?

Location-based augmented reality apps are sometimes called geo-based AR apps. Whatever you call them, one thing is important: they combine GPS mobile data and digital compass readings to detect the location and position of the device. The application works like this: the AR app arranges queries to be dispatched to the sensors, and once the data has been acquired, the app can determine where virtual information (such as images) should be added to the real world. Location-based augmented reality apps can be used both indoors and outdoors. When indoors, where it isn't possible to connect to GPS, the application will use beacons for location data.

The best examples of existing location-based augmented reality apps

While reading about location-based augmented reality apps can give you a good idea of how they work, to be really inspired you need to try some out for yourself. Here's a list of some of the best location-based augmented reality apps out there.

Yelp Monocle

Yelp Monocle helps you navigate an unknown city. Using GPS, it provides exactly the sort of information you'd expect from Yelp, but in a format that's fully integrated with your surroundings. So, you can see restaurant reviews and shop opening hours as you move around your environment.

Ingress

Ingress is an augmented reality gaming app that immerses you in a (semi) virtual world. Your main mission is to find portals that the game 'creates' in your immediate environment and open them. Essentially, the game is a great way to explore the world around you, placing a new augmented layer on a place that might otherwise be familiar.

Vortex Planetarium

Vortex Planetarium is an app for aspiring astronomers or anybody else with a passing interest in astronomy. The app detects the user's location and then provides them with celestial data to better understand the night sky.

Steps to create a location-based AR app

So, if you like the idea of a location-based augmented reality app, you'll probably want to get started. As we've seen, these apps can be incredibly complex, but if you break the development process down, it should become much easier.

1. Determine what resources you need

Depending on the complexity of your app, you need to determine what resources are required - that could be anything from data to frameworks and third-party services.
For example, if you plan to create a game with 3D objects, you'll need to use Unity to build in that level of functionality and realism.

2. Choose the right augmented reality tool

There is a huge number of augmented reality software development kits out there. Rather than wade through every single one, here are some of the most popular SDKs, which give you the widest range of possible features.

ARKit by Apple

ARKit from Apple features just about everything you'd need to develop an augmented reality application. For example, it combines computer vision and camera data to track the user's environment. ARKit is also able to adjust the light level in the virtual model to respond to the level of light in the real world. ARKit 2 recently brought users a number of cool new features. For example, it allows you to build interactivity into your application, and it also allows you to build 'memory' into your app so it can 'remember' the location of augmented reality objects.

ARCore by Google

In Google's ARCore you'll find a mapping tool, which is particularly useful for developing location-based AR apps. ARCore can also track motion and detect vertical and horizontal surfaces. In the latest version of ARCore, users can take two devices and work with one AR object from different viewing angles.

3. Add geolocation data

Not all SDKs provide a mapping feature. If yours doesn't, it's essential to make sure you add in geolocation data - without it, the app wouldn't work! As we've already seen, GPS technology is typically used. It's convenient and it can detect a user's location anywhere. It can, however, consume a lot of energy. Location services on iOS and Android will help to activate geolocation on the device.

3 augmented reality pitfalls to avoid

Developing something as complex as a location-based augmented reality app is bound to lead to some challenges. So be prepared - watch for some of these pitfalls.

Ensure you have proper functionality. When users move with their camera and look for AR objects, these objects should remain static, regardless of the user's movements. To do this, use SLAM - Simultaneous Localization and Mapping. This is a technique that allows software systems - like robots - to 'understand' where they are situated in relation to their surroundings.

Accuracy. A crucial factor for any AR app is accuracy. When developing your app, it's essential to consider the user's position to ensure that the app sends queries to sensors correctly. If it doesn't, the whole experience could seem plain weird for the user. Similarly, the distance between the device and the real world must be calculated correctly - again, if it isn't, your application simply will not work.

Get started - build an awesome augmented reality app!

Clearly, building a location-based augmented reality app isn't easy. It requires skill and a commitment to keep going in the face of challenges. You certainly need a team of great developers around you if you're going to deliver something that makes an impact. But, really, that's what makes software development exciting, right?

Author Bio

Vitaly Kuprenko is a technical writer at Cleveroad, a web and mobile app development company in Ukraine. He enjoys writing about tech innovations and digital ways to boost businesses.

Magic Leap unveils Mica, a human-like AI in augmented reality.
Magic Leap teams with Andy Serkis' Imaginarium Studios to enhance Augmented Reality
"As Artists we should be constantly evolving our technical skills and thought processes to push the boundaries on what's achievable," Marco Matic Ryan, Augmented Reality Artist


How AI is transforming the manufacturing industry

Kunal Parikh
15 Dec 2017
7 min read
After more than five decades of incremental innovation, the manufacturing sector is finally ready to pivot, thanks to Industry 4.0. A kind of self-awareness in technology plays a key role in ushering in Industry 4.0, and AI in the manufacturing sector aims to achieve just that: the creation of systems that can perceive their environment and act accordingly. One of the prominent minds in AI, Andrew Ng, believes factories to be AI's next frontier! Andrew is on a mission to AI-ify manufacturing with his new start-up, Landing.AI. For this initiative he partnered with Foxconn, the world's largest contract manufacturer and maker of Apple's iPhones. Together they aim to develop a wide range of AI transformation programs, from the introduction of new technologies and operational processes to automated quality control and much more. In this AI-powered industrial revolution, machines are becoming smarter and interconnected. Manufacturers are using the embedded intelligence of machines to collect and analyze data and generate meaningful insights. These are then used to run equipment efficiently and optimize workflows of operations and supply chains, among other things. Thus, AI is leaving an indelible impact across the manufacturing cycle. Further, a new wave of automation is transforming the role of the human workforce, wherein AI-driven robots are empowering production 24 hours a day. This is helping industrial environments gear up for the shift towards the smart factory environment. Below are some ways AI is revolutionizing manufacturing.

Predictive analytics for increased production output

Smart manufacturing systems are leveraging the power of predictive analysis and machine learning algorithms to enhance production capacity. Predictive analytics derives its power from the data collected from the devices or sensors embedded in a manufacturer's industrial equipment. These sensors become part of the IoT (Internet of Things), which collects and shares data with data scientists in the cloud. This setup is helping manufacturing industries move from a repair-and-replace to a predict-and-fix maintenance model (a toy sketch of this kind of monitoring appears below). They do this by enabling these businesses to retrieve the right information at the right time to make the right decisions. For instance, in a pump manufacturing company, data scientists could collect, store, and analyze sensor data based on machine attributes like heat, vibration, and noise. This data can be stored in the cloud, allowing for an array of analyses to be performed, from understanding machine performance to predicting and monitoring disruption in processes and equipment remotely. Further, syncing up production schedules with parts availability can ensure enhanced production output.

Enhancing product and service quality with machine and deep learning algorithms

Manufacturers can deploy supervised/unsupervised ML, DL, and reinforcement learning algorithms to monitor quality issues in the manufacturing process. For instance, researchers at Lappeenranta University of Technology in Finland have developed an innovative welding system for high-strength steel. They used unsupervised learning to allow the system to mimic a human's ability to self-explore and self-correct. This welding system detects imperfections and self-corrects during the welding process using a new kind of sensor system controlled by a neural network program. Further, it also calculates other faults that may arise during the entire process.
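Before moving on to quality inspection, here is a minimal sketch of the predict-and-fix monitoring described above: flagging anomalous sensor readings against a rolling mean and standard deviation. It is a simplified illustration on synthetic data, not the approach of any of the systems named in this article.

import numpy as np

def flag_anomalies(readings, window=50, threshold=3.0):
    """Flag readings that deviate from the rolling mean of the recent
    history by more than `threshold` standard deviations."""
    readings = np.asarray(readings, dtype=float)
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = history.mean(), history.std()
        # A large deviation suggests scheduling an inspection before failure
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Synthetic vibration data with one injected spike at index 400.
data = np.random.normal(1.0, 0.05, 500)
data[400] = 2.5
print(flag_anomalies(data))  # typically prints [400]

A production system would replace the synthetic series with a live sensor stream and feed the flags into a maintenance scheduling workflow.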
Visual inspection technology in an industrial environment identifies both functional and cosmetic defects. IBM has developed a new offering for manufacturing clients to automate visual quality inspections. Rooted in deep learning, a centralized 'learning service' collects images of all products - normal and abnormal. Next, it builds analytical models to recognize and classify different characteristics of machine parts and components as OK or NG: characteristics that meet quality specifications are considered OK, while those that don't are classified as NG.

Predictive maintenance for enhanced MRO (Maintenance, Repair, and Overhaul) performance

Manufacturing industries strive to achieve excellence throughout the production process. To ensure this, machinery embedded with sensors generates real-time performance and workload data. This helps in diagnosing faults and in predicting the need for equipment maintenance. For instance, a machine may break down due to lack of maintenance in the long run, incurring losses to the business. With predictive maintenance, businesses can be better equipped to handle equipment malfunction by identifying significant causal factors like weather and temperature. Targeted predictive maintenance generates critical information, such as which machine parts will need replacing and when. This helps in reducing equipment downtime, lowering maintenance costs, and pre-emptively addressing aging equipment.

Reinforcement learning for managing warehouses

Large warehouses face challenging times in streamlining space, managing inventories, and reducing transit time. Manufacturing industries are employing reinforcement learning for efficient warehouse management. The RL approach uses trial-and-error iterations within an environment to achieve a particular goal. Imagine what a breeze warehousing could be - and the associated cost savings - if robots could pick up the right products from various lots and move them to the right destinations with great precision. Here, reinforcement learning based algorithms can improve the efficiency of such intelligent warehouses with multi-robot systems by addressing task scheduling and path planning issues. Fanuc, a Tokyo-based company, employs robots with reinforcement learning abilities to perform such tasks with great agility and precision.

AI in supply chain management

AI is helping manufacturers gain an in-depth understanding of the complex variables at play in the supply chain and predict future scenarios. To enable seamless insight generation, businesses are opting for more flexible and efficient cyber-physical systems. These intelligent systems are self-configuring and self-optimizing structures that can predict problems and minimize losses. Thus they help businesses innovate rapidly by reducing time to market, foreseeing uncertainties, and dealing with them promptly. Siemens, for example, is creating a self-organizing factory that aims to automate the entire supply chain by generating work orders using demand and order information.

Implications of AI in Industry 4.0

Industry 4.0 is the new way of manufacturing using automation, devices connected on the IoT, cloud, and cognitive computing. It propagates the concept of the "smart factory," in which cyber-physical systems observe the physical processes of the factory and make discrete decisions accordingly. As AI finds its application in Industry 4.0, computers will merge with robotics to automate and maximize the efficiency of industrial processes.
Powered by machine learning algorithms, these computer systems can control robots with minimal human intervention. For instance, in a manufacturing setup, AI can work alongside systems like SCADA to control industrial processes in an efficient manner. These systems can monitor, collect, and process real-time data by directly interacting with devices such as sensors, pumps, and motors through human-machine interface (HMI) software. These machine-to-machine communication systems give new direction to the potential of human-machine collaboration, thus changing the way we see workforce management. Industry 4.0 will favor those who can build software, hardware, and firmware; those who can adapt and maintain new equipment; and those who can design automation and robotics. Within Industry 4.0, augmented reality and virtual reality are other cutting-edge, production-ready technologies that are making the idea of a smart factory a reality. The recent relaunch of Google Glass, specially designed for the factory floor, is worth a mention here. The Wi-Fi-enabled glasses allow factory workers, mechanics, and other technicians to view instructional videos, manuals, training videos, etc., all in their line of sight. This helps in maintaining higher standards of work while ensuring safety and agility.

In conclusion

Manufacturing industries are gearing up to harness AI along with IoT and AR/VR to create an agile manufacturing environment and to make smarter, real-time decisions. AI is helping realize the full potential of the Industrial Internet of Things (IIoT) by applying machine learning, deep learning, and other evolutionary algorithms to sensor data. Human-machine collaboration is transforming the scenario at fulfillment centers, creating a win-win situation for both humans and robots. Robots with motion sensors employed at fulfillment centers move across fields of QR codes with precision and agility without running into each other - a fascinating view. Imagine a real-life JARVIS from the movie Iron Man managing entire supply chains or factory spaces. The day is not far away when we will see a JARVIS-like advanced virtual assistant that uses sensors to collect real-time data, AI to process the data, and blockchain to securely transmit the information, while using AR to interact with us visually. It could take care of system and mechanical failures remotely while taking control of the factory for efficient energy management. Manufacturers could go save the world or unveil new products, Iron Man style!


What is Kotlin?

Hari Vignesh
09 Oct 2017
5 min read
Kotlin is a statically typed programming language for the JVM, Android, and the browser. Kotlin is a new programming language from JetBrains, the maker of the world's best IDEs. It's also now the official language for Android app development.

Why Kotlin?

Before we begin highlighting the brilliant features of Kotlin, we need to understand how Kotlin originated and evolved. We already have many programming languages; how did Kotlin emerge to capture programmers' hearts? A 2013 study showed that language features matter little compared to ecosystem issues when developers evaluate programming languages. Kotlin compiles to JVM bytecode or JavaScript. It is not a language you will write a kernel in. It is of greatest interest to people who work with Java today, although it could appeal to all programmers who use a garbage collected runtime, including people who currently use Scala, Go, Python, Ruby, and JavaScript. Kotlin comes from industry, not academia. It solves problems faced by working programmers and developers today. As an example, the type system helps you avoid null pointer exceptions. Research languages tend to just not have null at all, but this is of no use to people working with large codebases and APIs which do. Kotlin costs nothing to adopt! It's open source, but that's not the point. It means that there's a high-quality, one-click Java to Kotlin converter tool (available in Android Studio), and a strong focus on Java binary compatibility. You can convert an existing Java project one file at a time and everything will still compile, even for complex programs that run to millions of lines of code. Kotlin programs can use all existing Java frameworks and libraries, even advanced frameworks that rely on annotation processing. The interop is seamless and does not require wrappers or adapter layers. It integrates with Maven, Gradle, and other build systems. It is approachable and can be learned in a few hours by simply reading the language reference. The syntax is clean and intuitive. Kotlin looks a lot like Scala, but it's simpler. The language balances terseness and readability well. It enforces no particular philosophy of programming, such as an overly functional or OOP styling.

Kotlin features

Let me summarize why it's the right time to jump from Java to Kotlin.

Concise: Drastically reduce the amount of boilerplate code you need to write.
Safe: Avoid entire classes of errors such as null pointer exceptions.
Versatile: Build server-side applications, Android apps, or front-end code running in the browser.
Interoperable: Leverage existing frameworks and libraries of the JVM with 100 percent Java interoperability.

Brief discussion

Let's discuss a few important features in detail.

Functional programming support

Functional programming is not easy, at least in the beginning - that is, until it becomes fun. Kotlin makes it practical with zero-overhead lambdas and the ability to do mapping, folding, etc. over standard Java collections. The Kotlin type system also distinguishes between mutable and immutable views over collections.

1. Function purity

The concept of a pure function (a function that does not have side effects) is the most important functional concept, which allows us to greatly reduce code complexity and get rid of most mutable state.

2. Higher-order functions

Higher-order functions either take functions as parameters, return functions, or both. Higher-order functions are everywhere. You just pass functions to collections to make code easy to read.
titles.map { it.toUpperCase() } reads like plain English. Isn't it beautiful?

3. Immutability

Immutability makes it easier to write, use, and reason about code (the class invariant is established once and then unchanged). The internal state of your app components will be more consistent. Kotlin enforces immutability by introducing the val keyword, as well as Kotlin collections, which are immutable by default. Once a val or a collection is initialized, you can be sure about its validity.

Null safety

Kotlin's type system is aimed at eliminating the danger of null references from code, also known as 'The Billion Dollar Mistake.' One of the most common pitfalls in many programming languages, including Java, is that of accessing a member of a null reference, resulting in a null reference exception. In Java, this would be the equivalent of a NullPointerException, or NPE for short. In Kotlin, the type system distinguishes between references that can hold null (nullable references) and those that cannot (non-null references). For example, a regular variable of type String can't hold null.

How to migrate effectively to Kotlin?

Migration is one of the last things that any developer or organization wants. There are a lot of advantages when you migrate from Java to Kotlin, but the bottom line is that it makes the developer's job easier, which in turn reduces bugs, improves code quality, and so on. Migrating effectively can take many routes, but my advice would be to first convince the management that you need to migrate (if you're a developer). Then start by writing test cases, to get familiar with the language. Then, since Kotlin is fully interoperable with Java, you can start changing one file/module at a time.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Hybrid mobile apps: What you need to know

Sugandha Lahoti
26 Apr 2018
4 min read
Hybrid mobile apps have been around for quite some time now, but advances in mobile development software and changes in user behavior have allowed them to grow. Today, users expect hybrid apps, even if they wouldn't know what a 'hybrid app' actually is.

What is a hybrid mobile app?

A hybrid app is essentially a web application that acts like a native app - or a native app that acts like a web application. That means it can do everything HTML5 does while also incorporating native app features, like access to a phone's camera. Hybrid mobile apps consist of two parts. The first is the core code built using web languages such as HTML, CSS, and JavaScript. The second is a native shell that loads the code using WebView.

Advantages of hybrid mobile apps

Hybrid apps are much easier to build than native apps. This is because they are built using HTML, CSS, and JavaScript - technologies that typically run in the browser. They also have a faster development cycle than native apps because you only have a single JavaScript codebase. It is, however, important to note that hybrid mobile apps require third-party tools such as Apache Cordova to ease communication between the web view and the native platform. Noteworthy hybrid apps include MarketWatch, Untappd, and Sworkit. Hybrid mobile apps can run on both Android and iOS devices (the two most prominent OSes). This is great for developers as it means less work for them - code can be reused for progressive web applications and desktop applications with minor tweaking.

Disadvantages of hybrid mobile apps

Although they're extremely versatile, hybrid apps have certain disadvantages. They're often a little more expensive than standard web apps because you have to work with the native wrapper. It's also sometimes a disadvantage to be dependent on a third-party platform. Compared to native apps, hybrid apps aren't quite as interactive and are often a bit slower, since the app depends on resources from the web. Hybrid mobile apps also generally have a standard template; any customization you want to do in your application will take you away from the hybrid model. If that is the case, you may as well go native.

Hybrid mobile app frameworks

There is a good range of hybrid mobile application frameworks out there for mobile developers at the moment. Let's take a look at some of the best.

React Native

Facebook's React Native is a mobile framework for writing code once and deploying it to multiple platforms. It compiles to native mobile app components to build native mobile applications (iOS, Android, and Windows) in JavaScript. React Native's library includes Flexbox CSS styling, inline styling, and debugging, and supports deploying to either the App Store or Google Play.

Ionic

Ionic Framework is an open-source SDK for hybrid mobile app development, licensed under MIT. It is built on top of Angular.js and Apache Cordova. Ionic provides tools and services for developing hybrid mobile apps using web technologies like CSS, HTML5, and Sass. Apps built using Ionic can be distributed through native app stores and installed on devices by using Cordova.

Xamarin

Microsoft's Xamarin development platform allows developers to code for many platforms in C#. Developers can use Xamarin tools to write native Android, iOS, and Windows apps with a shared C# codebase, and share code across multiple platforms.

PhoneGap

Adobe's PhoneGap framework is an open source distribution of the Apache Cordova framework.
With PhoneGap, hybrid applications are built with HTML5 and CSS3 (for rendering) and JavaScript (for logic), to be used across multiple platforms.

Hybrid mobile apps are great for users

Hybrid mobile apps are particularly effective when you want to build and deploy an app more efficiently. They are also useful for building prototype applications. However, the key thing to remember about hybrid mobile apps is that many users today expect the type of experience they deliver. The old distinction between browser and native experiences has almost disappeared. A well-written hybrid app does not behave or look any different than its native equivalent, and that, really, is what users want.

Also, check out:
React Native Cookbook
React and React Native
Learning Ionic - Second Edition
Ionic 2 Cookbook - Second Edition
Mastering Xamarin UI Development


Polycloud: a better alternative to cloud agnosticism

Richard Gall
16 May 2018
3 min read
What is polycloud?

Polycloud is an emerging cloud strategy that is starting to take hold across a range of organizations. The concept is actually pretty simple: instead of using a single cloud vendor, you use multiple vendors. By doing this, you can develop a customized cloud solution that is suited to your needs. For example, you might use AWS for the bulk of your services and infrastructure, but decide to use Google's cloud for its machine learning capabilities. Polycloud has emerged because of the intensely competitive nature of the cloud space today. The three major vendors - AWS, Azure, and Google Cloud - don't particularly differentiate their products; the core features are pretty much the same across the market. Of course, there are certain subtle differences between each solution, as the example above demonstrates; taking a polycloud approach means you can leverage these differences rather than compromising with your vendor of choice.

What's the difference between a polycloud approach and a cloud agnostic approach?

You might be thinking that polycloud sounds like cloud agnosticism. And while there are clearly many similarities, the differences between the two are very important. Cloud agnosticism aims for a certain degree of portability across different cloud solutions. This can, of course, be extremely expensive. It also adds a lot of complexity, especially in how you orchestrate deployments across different cloud providers. True, there are times when cloud agnosticism might work for you; if you're not making use of vendor-specific services, then yes, cloud agnosticism might be the way to go. However, in many (possibly most) cases, cloud agnosticism makes life harder. Polycloud makes it a hell of a lot easier. In fact, it ultimately does what many organizations have been trying to do with a cloud agnostic strategy: it takes the parts you want from each solution and builds your setup around what you need. Perhaps one of the key benefits of a polycloud approach is that it gives more power back to users. Your strategic thinking is no longer limited to what AWS, Azure, or Google offers - you can instead start with your needs and build the solution around that.

How quickly is polycloud being adopted?

Polycloud first featured in Thoughtworks' Radar in November 2017. At that point it was in the 'assess' stage of Thoughtworks' cycle; this means it was simply worth exploring and investigating in more detail. However, in its May 2018 Radar report, polycloud had moved into the 'trial' phase. This means it is seen as an approach worth adopting. It will be worth watching the polycloud trend closely over the next few months to see how it evolves. There's a good chance that we'll see it come to replace cloud agnosticism. Equally, it's likely to impact the way AWS, Azure, and Google respond. In many ways, the trend is a reaction to the way the market has evolved; it may force the big players in the market to evolve what they offer to customers and clients.

Read next
Serverless computing wars: AWS Lambdas vs Azure Functions
How to run Lambda functions on AWS Greengrass


A non-programmer's guide to learning Machine learning

Natasha Mathur
05 Sep 2018
11 min read
Artificial intelligence might seem intimidating, but it isn't actually as complex as you might think. Many of the tools that have been developed over the last decade or so have helped to make artificial intelligence and machine learning more accessible to engineers with varying degrees of experience and knowledge. Today, we've got to a stage where it's now accessible even to people who have barely written a line of code in their life! Pretty exciting, right? But if you're completely new to the field, it can be challenging to know how to get started - fortunately, we're about to help you overcome that first hurdle. If you are an AI denier, then be sure to first read 'why learn Machine Learning as a non-techie' before you move forward. A strong purpose and belief is the first step to learning anything new. Alright, now here's how you can get started with artificial intelligence and machine learning techniques quickly.

0. Use a free MLaaS or a no-code interactive machine learning tool to experience first-hand what is possible with machine learning: Some popular examples of no-code machine-learning-as-a-service options are Microsoft Azure, BigML, Orange, and Amazon ML. Read Q.2 under the FAQ section below to know more about this topic.

1. Learn linear algebra: Linear algebra is the elementary unit for ML. It helps you effectively comprehend the theory behind machine learning algorithms and how they work. It also strengthens related skills, such as statistics and programming, which all help in ML.

Learning resources:
Linear Algebra for Beginners: Open Doors to Great Careers
Linear algebra Basics

2. Learn just enough Python or any other programming language: You can get started with any language of your interest, but we suggest Python as it's great for people who are new to programming. It's easy to learn due to its simple syntax, and you'll be able to quickly implement ML algorithms. It also has a rich development ecosystem that offers a ton of libraries and frameworks for machine learning, such as Scikit-learn, Lasagne, Numpy, Scipy, Theano, TensorFlow, etc.

Learning resources:
Python Machine Learning
Learn Python in 7 Days
Python for Beginners 2017 [Video]
Learn Python with codecademy
Python editor for beginner programmers

3. Learn basic probability theory and statistics: A lot of fundamental statistical and probability theories form the basis for ML. Since you've probably already learned probability and statistics in school, it's easier to dive into the advanced statistics used in ML. Machine learning, in its currently widely used form, is a way to predict odds and see patterns. Knowing statistics and probability is important as it will help you better understand why any machine learning algorithm works. For example, your grounding in this area will help you ask the right questions, choose the right set of algorithms, and know what to expect as answers from your ML model on questions such as:

- What are the odds of this person also liking this movie given their current movie watching choices? (collaborative filtering and content-based filtering)
- How similar is this user to that group of users who bought a bunch of stuff on my site? (clustering, collaborative filtering, and classification)
- Could this person be at risk of cancer given a certain set of traits and health indicator observations? (logistic regression)
- Should you buy that stock? (decision tree)

Also, check out our interview with James D. Miller to know more about why learning stats is important in this field.
Learning resources:
Statistics for Data Science [Video]

4. Learn machine learning algorithms: Do not get intimidated! You don't have to be an expert to learn ML algorithms. Knowing the basic ML algorithms that are widely used in real-world applications - like linear regression, naive Bayes, and decision trees - is enough to get you started. Learn what they do and how they are used in machine learning.

5. Learn NumPy, scikit-learn, Keras, or any other popular machine learning framework: It can be confusing initially to decide which framework to learn, and each one has its own advantages and disadvantages. NumPy is a linear algebra library which is useful for performing mathematical and logical operations; you can easily work with large multidimensional arrays using NumPy. scikit-learn helps with quick implementation of popular algorithms on datasets, as just one line of code makes different algorithms available to you. Keras is minimalistic and straightforward, with a high level of extensibility, so it is easier to approach.

Learning resources:
Hands-on Machine Learning with TensorFlow [Video]
Hands-on Scikit-learn for Machine Learning [Video]

If you have reached this far, it is time to put your learning into practice. Go ahead and create a simple linear regression model using some publicly available dataset in your area of interest (a minimal sketch of what this looks like appears in the FAQ below). Kaggle, ourworldindata.org, the UC Irvine Machine Learning Repository, and elitedatascience all have rich sets of clean datasets in varied fields. Now, it is necessary to commit and put in daily effort to practice these skills. Quora, Reddit, Medium, and Stack Overflow will be your best friends when it comes to resolving doubts regarding any of these skills. Data Helpers is another great resource that provides newcomers with help on queries regarding entering the ML field and related topics. Additionally, once you start getting the hang of these skills, identify your strengths and interests to realign your career goals. Research the kind of work you want to put your newly gained machine learning skills to. It needn't be professional or serious; it just needs to be something that you deeply care about or are passionate about. This will pull you through your learning milestones, should you feel low at some point. Also, don't forget to collaborate with other people and learn from them. You can work with web developers, software programmers, data analysts, data administrators, game developers, etc. Finally, keep yourself updated with all the latest happenings in the ML world. Follow top experts and influencers on social media, top blogs on machine learning, and conferences. Once you have checked these steps off your list, you'll be ready to start off with your ML project.

Now, we'll be looking at the most frequently asked questions by beginners in the field of machine learning.

Frequently asked questions by beginners in ML

As a beginner, it's natural to have a lot of questions regarding ML. We'll be addressing the top three frequently asked questions by beginners or non-programmers when it comes to machine learning:

Q.1 I am looking to make a career in machine learning but I have no prior programming experience. Do I need to know programming for machine learning?

In a nutshell, yes. If you want a career in machine learning then having some form of programming knowledge really helps. As mentioned earlier in this article, learning a programming language can really help you with implementing ML algorithms.
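To make the simple linear regression model suggested in the practice step above concrete, here is a minimal sketch using scikit-learn. The data is synthetic, standing in for whichever public dataset you pick, and the example assumes scikit-learn and NumPy are installed:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real dataset: one feature, a noisy linear target.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.5 * X.ravel() + 2.0 + rng.normal(0, 1, size=100)

# Hold out some data so we can check how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)  # the "one line" that fits the model
print("Learned slope:", model.coef_[0], "intercept:", model.intercept_)
print("R^2 on held-out data:", model.score(X_test, y_test))

Swapping LinearRegression for another estimator is a one-line change, which is exactly why scikit-learn is such a friendly starting point.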
Programming also lets you understand the internal mechanics of machine learning, so having programming as a prior skill is great. Again, as mentioned before, you can get started with Python, which is the easiest and the most common language for ML. However, programming is just a part of machine learning. For instance, "machine learning engineers" typically write more code than develop models, while "research scientists" work more on modelling and analyzing different models. Now, ML is based on the principles of statistical inference, and for talking statistically to the computer we need a language - that's where coding comes in.

Read also: Why is Python so good for AI and ML? 5 Python Experts Explain
Top languages for Artificial Intelligence development

Q.2 Are there any tools that can help me with machine learning without touching a single line of code?

Yes. With the rise of MLaaS (Machine learning as a service), there are certain tools that help you get started with machine learning right away. These are especially useful for business applications of ML, such as predictive modelling and clustering.

Read also: How MLaaS is transforming cloud

Some of the most popular ones are:

BigML: This cloud-based web service lets you upload your data, prepare it, and run algorithms on it. It's great for people without extensive data science backgrounds. It offers a clean and easy-to-use interface for configuring algorithms (decision trees) and reviewing the results. Being focused "only" on machine learning, it comes with a wide set of features, all well integrated within a usable web UI. Other than that, it also offers an API, so that if you like it you can build an application around it.

Microsoft Azure: The Microsoft Azure ML Studio is a "GUI-based integrated development environment for constructing and operationalizing Machine Learning workflow on Azure". So, via an integrated development environment called ML Studio, people without a data science background, or non-programmers, can also build data models with the help of drag-and-drop gestures and simple data flow diagrams. This also saves a lot of time through ML Studio's library of sample experiments.

Learning resources:
Microsoft Azure Machine Learning
Machine Learning In The Cloud With Azure ML [Video]

Orange: This is an open source machine learning and data visualization studio for novices and experts alike. It provides a toolbox comprising text mining (topic modelling) and image recognition. It also offers a design tool for visual programming which allows you to connect together data preparation, algorithms, and result evaluation, thereby creating machine learning "programs". Apart from that, it provides over 100 widgets for the environment, and there's also a Python API and library available which you can integrate into your application.

Amazon ML: Amazon ML is a part of Amazon Web Services (AWS) that combines powerful machine learning algorithms with interactive visual tools to guide you towards easily creating, evaluating, and deploying machine learning models. So, whether you are a data scientist or a newbie, it offers ML services and tools tailored to meet your needs and level of expertise. Building ML models using Amazon ML consists of three operations: data analysis, model training, and evaluation.
Learning resources:
Effective Amazon Machine Learning

Q.3 Do I need to know advanced mathematics (college graduate level) to learn machine learning?

It depends. As mentioned earlier, an understanding of the following mathematical topics - probability, statistics, and linear algebra - can really make your machine learning journey easier and also help simplify your code. These help you understand the "why" behind the working of machine learning algorithms, which is quite fundamental to understanding ML. However, not knowing advanced mathematics is not an excuse for not learning machine learning. There are a lot of libraries which make the task of applying an ML algorithm to solve a problem easier. One such example is the widely used Python scikit-learn library. With scikit-learn, you just need one line of code and you'll have the most common algorithms there for you, ready to be used. But if you want to go deeper into machine learning, then knowing advanced mathematics is a prerequisite, as it will help you understand the algorithms, the formulas, how the learning is done, and many other machine learning concepts. Also, with so many courses and tutorials online, you can always learn advanced mathematics on the side while exploring machine learning.

So, we looked at the three most asked questions by beginners in the field of machine learning. In the past, machine learning has provided us with self-driving cars, effective web search, speech recognition, and more. Machine learning is extremely pervasive; in fact, many researchers believe that ML is the best way to make progress towards human-level AI. Learning ML is not an easy task, but it's not next to impossible either. In the end, it all depends on the amount of dedication and effort that you're willing to put in to get a grasp of it. We've just touched the tip of the iceberg in this article; there's a lot more to know in machine learning, which you will get the hang of as you get your feet wet in it. That being said, all the best for the road ahead!

Facebook launches a 6-part ML video series
7 of the best ML conferences for the rest of 2018
Google introduces Machine Learning courses for AI beginners
article-image-defensive-strategies-industrial-organizations-can-use-against-cyber-attacks
Defensive Strategies Industrial Organizations Can Use Against Cyber Attacks

Guest Contributor
20 Mar 2019
8 min read
Industrial organizations are prime targets for spies, criminals, hacktivists and even enemy countries. Spies from rival organizations seek ways to access industrial control systems (ICS) so they can steal intelligence and technology and gain a competitive advantage. Criminals look for ways to ransom companies by locking down IT systems. Hacktivists and terrorists are always looking for ways to disrupt and even endanger life through IT, and international antagonists might want to hack into a public system (e.g. a power plant) to harm a country's economic performance. This article looks at a number of areas where CTOs need to focus their attention when it comes to securing their organizations from cyber attacks.

Third Party Collaboration

The Target breach of November 2013 highlighted the risks of poor vendor management policies when it comes to cybersecurity. A third-party HVAC (Heating, Ventilation, and Air Conditioning) provider was connected into the retailer's IT architecture in such a way that, when it was hacked, cybercriminals could access and steal credit card details from its customers.

Every third party given access to your network, even a security vendor, needs to be treated as a possible accidental or deliberate vector of attack. These include catering companies, consultants, equipment rental firms, maintenance service providers, transport providers, and anyone else who requests access to the corporate network. Then there are sub-contractors to think about. The IT team and legal department need to be involved from the start to risk-assess third-party collaborations and ensure that access, if granted, is restricted to role-specific activities and reviewed regularly.

Insider and Outsider Threat

An organization's own staff can compromise a system's integrity either deliberately or accidentally. Deliberate attacks can be motivated by money, revenge, ideology or ego, and can be among the most difficult to detect and stop. Organizations should employ a combination of technical and non-technical methods to limit insider threat. Technical measures include granting minimum access privileges and monitoring data flow and user behavior for anomalies (e.g. logging into a system at strange hours or uploading data from a system unrelated to the user's job role).

One solution which can be used for this purpose is a privileged access management (PAM) system. This is a centralized platform usually divided into three parts: an access manager, a session manager, and a password vault manager. The access manager component handles system access requests based on the company's IAM (Identity and Access Management) policies. It is good practice to assign users to specific roles and to limit each user's access to only those services and areas of the network they need to perform their role; the PAM system automates this process, with any temporary extra permissions requiring senior authorization. The session manager component tracks user activity in real time and also stores it for future audit purposes. Suspicious user activity can be reported to super admins, who can then terminate access. The password vault manager component protects the root passwords of each system and ensures users follow the company's password policy.

Device management also plays an important part in access security. There is potentially a big security difference between an authorized user logging on to a system from a work desktop and the same user logging on to the same system via their mobile device.
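To illustrate the access-manager and session-manager ideas above, here is a deliberately simplified, hypothetical Python sketch; the role names, hours, and policy are invented for the example, and a real PAM platform is far more involved:

from datetime import datetime

# Hypothetical role-to-permissions policy: minimum privilege per role.
ROLE_PERMISSIONS = {
    "hvac_vendor": {"hvac_controller"},
    "operator": {"hvac_controller", "scada_hmi"},
}

NORMAL_HOURS = range(7, 19)  # 07:00-18:59; logins outside this are flagged

audit_log = []  # session manager: every decision is recorded for audit

def request_access(user, role, system, when=None):
    """Grant access only if the role permits the system; log everything."""
    when = when or datetime.now()
    allowed = system in ROLE_PERMISSIONS.get(role, set())
    anomalous = when.hour not in NORMAL_HOURS
    audit_log.append((when.isoformat(), user, role, system, allowed, anomalous))
    if anomalous:
        print(f"ALERT: {user} requested {system} at {when:%H:%M} (off-hours)")
    return allowed

# Example: a vendor account trying to reach a system outside its role.
print(request_access("hvac_inc", "hvac_vendor", "pos_network"))      # False
print(request_access("hvac_inc", "hvac_vendor", "hvac_controller"))  # True

The point of the sketch is the shape of the policy: deny by default, grant per role, and log every request so suspicious activity can be reviewed later.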
Non-technical strategies to tackle insider threat might include setting up a confidential forum for employees to report concerns and ensuring high-quality cybersecurity training is provided and regularly reviewed. When designing or choosing training packages, it is important to remember that not all employees will understand or be comfortable with technical language, so all instructions and training should be stripped of jargon as far as possible. Another tip is to include plenty of hands-on training and real-life simulations. Some companies test employee vulnerability by having their IT department craft a realistic phishing email and recording how many clicks it gets from employees; this highlights which employees or departments need refresher training. Robust policies for any sensitive data physically leaving the premises are also important. Employees should not be able to take work devices, disks or flash drives off the premises without the company's knowledge, and this is even more important after an employee leaves the company.

Data Protection

Post-GDPR, data protection is more critical than ever. Failure to protect EU-based customer data from theft can expose organizations to fines of up to 20 million euros, or even more depending on global turnover. Data needs to be secure both during transmission and while being stored. It also needs to be quick and easy to find and delete, should customers ask to access their data or request its removal. This can be complex, especially for large organizations using cloud-based services.

A full data audit is the first place to start before deciding what type of encryption is needed during data transfer and what security measures are necessary for stored data. For example, if your network has a demilitarized zone (DMZ), data in transit should always terminate there, and there should be no protocols capable of spanning it. Sensitive customer data or mission-critical data can be secured at rest by encrypting it and then applying cryptographic hashes. Your audit should also look at all components of your security setup; for example, problems with reporting threats can arise simply because there is insufficient storage space for firewall logs.
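As a toy illustration of encrypting data at rest and adding an integrity hash, here is a minimal sketch using Python's third-party cryptography library and hashlib; the key handling is deliberately simplified, and in production the key would live in a key vault or HSM, not in a variable:

import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in production: fetched from a key vault/HSM
fernet = Fernet(key)

record = b"card=4111111111111111;name=J. Doe"  # made-up sensitive data

ciphertext = fernet.encrypt(record)              # encrypt at rest
digest = hashlib.sha256(ciphertext).hexdigest()  # integrity hash, stored alongside

# Later: verify the stored blob has not been tampered with, then decrypt.
assert hashlib.sha256(ciphertext).hexdigest() == digest
assert fernet.decrypt(ciphertext) == record

Note that Fernet tokens are already authenticated internally; the separate SHA-256 digest simply mirrors the article's "encrypt, then apply a cryptographic hash" description for detecting tampering of stored blobs.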
VPN Vulnerabilities

Some organizations protect data in transit by routing it through a VPN (Virtual Private Network). However, this does not mean the data is necessarily safe from cybercriminals. One big problem with many set-ups is that traffic will be routed over the open internet should the VPN connection drop; a kill switch or network lock can help avoid this.

VPNs may also not be configured optimally, and some lack protection from various types of data leaks, including DNS leaks, WebRTC leaks, and IPv6 leaks. DNS leaks can occur if your VPN drops a connection and your browser falls back to its default DNS settings, exposing your IP address. WebRTC, a fairly new technology, enables browsers to talk to one another without using a server; this requires each browser to know the other's public IP address, and some VPNs are not designed to protect against this type of leak. Finally, IPv6 leaks happen if your VPN only handles IPv4 requests: any IPv6 requests will be passed on to your PC, which will automatically respond with your IP address. Most VPN leaks can be checked for using free online tools, and your vendor should either be able to solve the issue or you may need to consider a different vendor. If you can, use L2TP (Layer 2 Tunneling Protocol, typically paired with IPsec) or OpenVPN rather than the more easily compromised PPTP (Point-to-Point Tunneling Protocol).

Network Segmentation

Industrial organizations tend to use network segmentation to isolate individual zones should a compromise happen. For example, segmentation could immediately cut off all access to potentially dangerous machinery if an office-based CRM is hacked. The Purdue Model for Industrial Control Systems is the basis of ISA-99, a commonly referenced standard, which divides a typical ICS architecture into four to five zones and six levels. In the most basic model, an ICS is split into various area or cell zones which sit within an overall industrial zone, and a demilitarized zone (DMZ) sits between this industrial zone and the higher-level enterprise zone. Network segmentation is a complex task, but it is worth the investment: once it is in place, the attack surface of your network is reduced, and monitoring for intrusions and responding to cyber incidents becomes quicker and easier.

Intrusion Detection

Intrusion detection systems (IDS) are more proactive than simple firewalls, actively searching the network for signs of malicious activity. An IDS can be a hardware device or a software application and can use various detection techniques, from identifying malware signatures to monitoring deviations from normal traffic flow. The two most common classes of IDS are network intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS). While NIDS focus on incoming traffic, HIDS monitor existing files and folders.

Alarm filtering (AF) technology can help to sort genuine threats from false positives. When a system generates a warning for every anomaly it picks up, agents can find it hard to connect failures together to find the root cause. This can also lead to alarm fatigue, where the agent becomes desensitized to system alarms and misses a real threat. AF uses various means to pre-process system alarms so they can be better understood and acted upon; for example, related failures may be grouped together and then assigned to a priority list.
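To make the alarm-grouping idea concrete, here is a small, hypothetical Python sketch that clusters alarms arriving close together in time and surfaces the highest-severity group first; the alarm format and thresholds are invented for the example:

# Hypothetical alarms: (seconds since midnight, source, severity 1-5)
alarms = [
    (3600, "fw-edge", 2),
    (3605, "fw-edge", 3),
    (3607, "hmi-01", 5),
    (9000, "fw-edge", 1),
]

WINDOW = 60  # alarms within 60s of a group's start are considered related

def group_alarms(alarms, window=WINDOW):
    """Greedily group time-sorted alarms into bursts, then rank by severity."""
    groups = []
    for t, src, sev in sorted(alarms):
        if groups and t - groups[-1][0][0] <= window:
            groups[-1].append((t, src, sev))
        else:
            groups.append([(t, src, sev)])
    # Priority list: most severe burst first
    return sorted(groups, key=lambda g: max(a[2] for a in g), reverse=True)

for burst in group_alarms(alarms):
    worst = max(a[2] for a in burst)
    print(f"severity {worst}: {len(burst)} related alarm(s) -> {burst}")

A real system would correlate on more than timestamps (source, asset zone, alarm type), but the pre-processing shape is the same: group, rank, then present one prioritized list instead of a raw alarm stream.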
System Hardening and Patch Management

System hardening means locking down certain parts of a network or device, or removing features, to prevent access or stop unwanted changes. Patching is a form of system hardening, as it closes up vulnerabilities and prevents them from being exploited. To defend their organization, the IT support team should define a clear patch management policy: vendor updates should be applied as soon as possible and automated wherever they can be.

Author Bio

Brent Whitfield is CEO of DCG Technical Solutions, Inc. DCG provides a host of IT services Los Angeles businesses depend upon, whether they deploy in-house, cloud or hybrid infrastructure. Brent has been featured in Fast Company, CNBC, Network Computing, Reuters, and Yahoo Business.

RSA Conference 2019 Highlights: Top 5 cybersecurity products announced
Cybersecurity researcher withdraws public talk on hacking Apple's Face ID from Black Hat Conference 2019: Reuters report
5 lessons public wi-fi can teach us about cybersecurity

article-image-6-common-challenges-faced-by-android-app-developers
6 common challenges faced by Android App developers

Guest Contributor
21 Sep 2018
5 min read
The primary target for businesses while working on mobile apps is the Android platform, thanks to the massive market share the mobile operating system holds. Its popularity can be attributed to the fact that it is open source and is regularly updated with new enhancements and features.

Android devices generally tend to differ based on their hardware features, even when powered by the same version of the Android OS. This is why it is essential that, when developing apps for Android, developers create mobile apps capable of targeting a diverse range of devices running different versions of Android. During the various stages of planning, developing, and testing, developers need to focus comprehensively on an app's functionality, accessibility, usability, performance, and security, so that users stay engaged regardless of their choice of device. They also need to look for ways to make the app deliver a more personalized user experience across different devices and OS versions. Furthermore, developers need to understand and find solutions to the common challenges involved in Android app development.

Common Challenges Android App Developers Face

1. Hardware Features

The Android OS is unlike any other mobile operating system. For one thing, it is an open source system: Alphabet gives manufacturers the leeway to customize the operating system to their specific needs, and there are no regulations on the devices being released by different manufacturers. As a result, you can find various Android devices with different hardware features running the same Android version. Two smartphones running the latest Android version, for example, may have different screen resolutions, cameras, screen sizes, and other hardware characteristics. During Android app development, developers need to account for all of this to ensure the application delivers a personalized experience to each user.

2. Lack of Uniform User Interface Design Rules

Since Google is yet to release standard UI (user interface) design rules or processes for mobile app developers, most developers don't follow any standard UI development procedure. Because developers create custom UI interfaces in their preferred way, a lot of apps tend to function or look different across different devices. This diversity and incompatibility of the UI usually affects the user experience that the Android app delivers. Smart developers prefer a responsive layout that keeps the UI consistent across devices. Moreover, developers need to test the UI of the app extensively, combining emulators and real mobile devices. Designing a UI that makes the app deliver the same user experience across varying Android devices is one of the more daunting challenges developers face.

3. API Incompatibility

A lot of developers make use of third-party APIs to enhance the functionality and interoperability of a mobile device. Unfortunately, not all third-party APIs available for Android app development are of high quality. Some APIs were created for a particular Android version and will not work on devices running a different version of the operating system. Developers usually have to come up with ways to make a single API work across all Android versions, a task they often find very challenging.

4. Security Flaws

As previously mentioned, Android is open source software, and because of that, manufacturers find it easy to customize Android to their desired specifications.
However, this openness and the massive market size make Android a frequent target for security attacks. There have been several instances where the security of millions of Android devices has been affected by flaws and bugs such as mRST, Stagefright, FakeID, 'Certifi-gate', TowelRoot, and Installer Hijacking. Developers need to include robust security features in their applications and utilize the latest encryption mechanisms to keep user information secure and out of the hands of hackers.

5. Search Engine Visibility

The latest data from Statista shows that the Google Play Store contains a very large number of mobile apps. Additionally, a large share of Android users prefer free apps over paid ones, which is why developers need to promote their mobile applications to increase download numbers and employ app monetization options. The best way to promote an app to its target audience is to use a comprehensive digital marketing strategy, and many developers engage digital marketing professionals to promote their apps aggressively.

6. Patent Issues

Google doesn't implement any guidelines for evaluating the quality of new apps submitted to the Play Store. This lack of a quality assessment guideline causes a lot of patent-related issues for developers; some, to avoid patent disputes, have to modify and redesign their apps later on.

From my personal experience, I have tried to cover the general challenges faced by Android app developers. I'm sure staying wary of these challenges will help developers build successful apps in the most hassle-free way.

Author Bio

Harnil Oza is the CEO of Hyperlink InfoSystem, one of the leading app development companies in New York, USA and India, which delivers mobile solutions mainly on the Android and iOS platforms. He regularly contributes his knowledge on leading blogging sites.

LEGO launches BrickHeadz Builder AR, a new and free Android app to bring bricks and toys to life
How Android app developers can convert iPhone apps
How to Secure and Deploy an Android App