
Tech News - Web Development

354 Articles

React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more

Bhagyashree R
09 Aug 2019
3 min read
Yesterday, the React team announced the release of React 16.9. This release comes with an asynchronous testing utility, a programmatic Profiler, and a few deprecations and bug fixes. Along with the release announcement, the team also shared an updated React roadmap.

https://twitter.com/reactjs/status/1159602037232791552

Following are some of the updates in React 16.9:

Asynchronous act() method for testing

React 16.8 introduced a new API method called 'ReactTestUtils.act()'. This method enables you to write tests involving rendering and updating components that run closer to how React behaves in the browser. When introduced, it only supported synchronous functions; React 16.9 updates it to accept asynchronous functions as well.

Performance measurements with <React.Profiler>

Introduced in React 16.5, the React Profiler API collects timing information about each rendered component to find performance bottlenecks in your application. This release adds a new "programmatic way" of gathering measurements called <React.Profiler>. It keeps track of how many times a React application renders and how much each render costs. With these measurements, developers will be able to identify the parts of the application that are slow and need optimization.

Deprecations in React 16.9

Unsafe lifecycle methods are now renamed

The legacy component lifecycle methods have been renamed with an "UNSAFE_" prefix, indicating that code using these methods is more likely to have bugs in future releases of React. The following functions have been renamed:

'componentWillMount' to 'UNSAFE_componentWillMount'
'componentWillReceiveProps' to 'UNSAFE_componentWillReceiveProps'
'componentWillUpdate' to 'UNSAFE_componentWillUpdate'

This is not a breaking change; however, you will see a warning when using the old names. The team has also provided a 'codemod' script for automatically renaming these methods.

javascript: URLs will now give a warning

URLs that start with "javascript:" are deprecated as they can serve as "a dangerous attack surface". As with the renamed lifecycle methods, these URLs will continue to work, but will show a warning. The team recommends that developers use React event handlers instead. "In a future major release, React will throw an error if it encounters a javascript: URL," the announcement read.

What does the React roadmap look like?

The React team has made some changes to the roadmap it shared in November. React Hooks was released as planned; however, the team underestimated the follow-up work and ended up extending the timeline by a few months. After React Hooks shipped in React 16.8, the team focused on Concurrent Mode and Suspense for Data Fetching, which are already being used on Facebook's website. Previously, the team planned to split Concurrent Mode and Suspense for Data Fetching into two releases; now both features will be released in a single version later this year. "We shipped Hooks on time, but we're regrouping Concurrent Mode and Suspense for Data Fetching into a single release that we intend to release later this year," the announcement read.

To know more about React 16.9, you can check out the official announcement.

React 16.8 releases with the stable implementation of Hooks
React 16.x roadmap released with expected timeline for features like "Hooks", "Suspense", and "Concurrent Rendering"
React Conf 2018 highlights: Hooks, Concurrent React, and more
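To make the two headline features above concrete, here is a brief, illustrative sketch assuming a JSX/Babel build and a Jest-style test runner. The UserGreeting component and fetchUserName helper are hypothetical; act() from react-dom/test-utils and the <React.Profiler> onRender callback are the APIs described in the announcement.

import React, { useState, useEffect } from 'react';
import ReactDOM from 'react-dom';
import { act } from 'react-dom/test-utils';

// Hypothetical component that loads data after mounting.
function UserGreeting({ fetchUserName }) {
  const [name, setName] = useState(null);
  useEffect(() => {
    fetchUserName().then(setName);
  }, [fetchUserName]);
  return <p>{name ? `Hello, ${name}` : 'Loading...'}</p>;
}

// React 16.9: act() now accepts an async function, so effects that resolve
// promises can be flushed before the assertions run.
it('renders the fetched user name', async () => {
  const container = document.createElement('div');
  const fetchUserName = () => Promise.resolve('Ada');
  await act(async () => {
    ReactDOM.render(<UserGreeting fetchUserName={fetchUserName} />, container);
  });
  // e.g. expect(container.textContent).toContain('Ada');
});

// <React.Profiler> reports render timings for the subtree it wraps.
function onRender(id, phase, actualDuration) {
  console.log(`${id} ${phase} took ${actualDuration}ms`);
}

ReactDOM.render(
  <React.Profiler id="UserGreeting" onRender={onRender}>
    <UserGreeting fetchUserName={() => Promise.resolve('Ada')} />
  </React.Profiler>,
  document.getElementById('root')
);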


Bootstrap 5 to replace jQuery with vanilla JavaScript

Bhagyashree R
13 Feb 2019
2 min read
The upcoming major version of Bootstrap, version 5, will no longer have jQuery as a dependency; it will be replaced with vanilla JavaScript. In 2017, the Bootstrap team opened a pull request with the aim of removing jQuery entirely from the Bootstrap source, and it is now nearing completion. Under this pull request, the team has removed jQuery from 11 plugins including Util, Alert, Button, Carousel, and more. Using 'Data' and 'EventHandler' in unit tests is no longer supported. Additionally, Internet Explorer will not be compatible with this version. Despite these changes, developers will be able to use this version either with or without jQuery. Since this will be a major release, users can expect a few breaking changes.

Bootstrap is not alone; many other companies have been thinking of decoupling from jQuery. For example, last year GitHub incrementally removed jQuery from its frontend, mainly because of the rapid evolution of web standards and jQuery losing its relevance over time.

This news triggered a discussion on Hacker News, and many users were happy about this development. One user commented, "I think the reason is that many of the problems jQuery was designed to solve (DOM manipulation, cross-browser compatibility issues, AJAX, cool effects) have now been implemented as standards, either in Javascript or CSS and many developers consider the 55k minified download not worth it."

Another user added, "The general argument now is that 95%+ of jQuery is now native in browsers (with arguably the remaining 5% being odd overly backward compatible quirks worth ignoring), so adding a JS dependency for them is "silly" and/or a waste of bandwidth."

To read more in detail, check out Bootstrap's GitHub repository.

jQuery File Upload plugin exploited by hackers over 8 years, reports Akamai's SIRT researcher
GitHub parts ways with JQuery, adopts Vanilla JS for its frontend
Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?
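For readers wondering what replacing jQuery with "vanilla JavaScript" looks like in practice, here is an illustrative mapping of a few common jQuery idioms to native DOM APIs. The selectors, endpoint, and handlers are made up and are not taken from Bootstrap's codebase.

// Hypothetical handlers used below.
const onMenuClick = () => console.log('menu clicked');
const render = items => console.log(items);

// $('.alert').addClass('show');
document.querySelectorAll('.alert').forEach(el => el.classList.add('show'));

// $('#menu').on('click', onMenuClick);
const menu = document.getElementById('menu');
if (menu) menu.addEventListener('click', onMenuClick);

// $.ajax({ url: '/api/items', success: render });
fetch('/api/items').then(res => res.json()).then(render);

// $('#panel').hide();
const panel = document.getElementById('panel');
if (panel) panel.style.display = 'none';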


What's new in ECMAScript 2018 (ES9)?

Pravin Dhandre
20 Apr 2018
4 min read
ECMAScript 2018 - also known as ES9 - is now complete, with lots of features. Since the major features released with ECMAScript 2015, the language has matured through yearly update releases. After multiple draft submissions and completion of the 5-stage process, the TC39 committee has finalized the set of features that will be rolled out in June. The full list of proposals that were submitted to the TC39 committee can be viewed in its GitHub repository.

What are the key features of ECMAScript 2018? Let's have a quick look at the key ES9 features and how they are going to add value for web developers.

Lifting the template literal restriction

Template literals generally allow the embedding of languages such as DSLs. However, restrictions on escape sequences make this quite complicated. Removing the restriction raises the question of how to handle cooked template values containing illegal escape sequences; the proposal redefines the cooked value for illegal escape sequences as "undefined". This lifting of the restriction allows otherwise-illegal sequences like \xerxes and makes embedding languages simpler. The detailed proposal with templates can be viewed on GitHub.

Asynchronous iterators

The newer version provides syntactic support for asynchronous iteration with both the AsyncIterable and AsyncIterator protocols. The syntactic support helps in, for example, reading lines of text from an HTTP connection easily. In my opinion, this is one of the most important and useful features, and it makes the code look simpler. It introduces a new IterationStatement, for-await-of, and also adds syntax for creating async generator functions. The detailed proposal can be viewed on GitHub.

Promise.prototype.finally

As you are aware, promises make the execution of callback functions easy. Many promise libraries have a "finally" method through which you can run code no matter how the Promise resolves. It works by registering a callback which gets invoked when a promise is fulfilled or rejected. Bluebird, Q, and when are some examples. The detailed proposal can be viewed on GitHub.

Unicode property escapes in regular expressions

The ECMAScript 2018 version adds Unicode property escapes `\p{…}` and `\P{…}` to regular expressions. These are a new and unique type of escape sequence available when the u flag is set. With this feature, one can create Unicode-aware regular expressions with ease. These escapes are easily readable, compact, and get updated automatically by the ECMAScript engine. The detailed proposal can be viewed on GitHub.

RegExp lookbehind assertions

Assertions are regular expressions consisting of anchors and lookarounds that either succeed or fail based on whether a match is found. ECMAScript extends the existing assertions, which look ahead in the forward direction, with lookbehind assertions that look in the backward direction. This assertion is helpful in cases like validating a dollar amount without capturing the dollar sign, where a pattern is or is not preceded by another. The detailed proposal can be viewed on GitHub.

Object rest/spread properties

An earlier version of ECMAScript introduced rest and spread elements for array literals. Likewise, the newer version introduces rest and spread properties for object literals. Both of these operations for objects help in extracting the properties we want while leaving out unwanted ones. The detailed proposal can be viewed on GitHub.
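A small, self-contained sketch of three of the additions above - object rest/spread, Promise.prototype.finally, and asynchronous iteration with for-await-of. The values and the /settings.json endpoint are made up for illustration.

// Object rest/spread: pull one property out, keep the rest, then extend a copy.
const config = { debug: true, host: 'localhost', port: 8080 };
const { debug, ...connection } = config;            // rest: { host, port }
const extended = { ...connection, secure: false };  // spread

// Promise.prototype.finally: runs whether the promise fulfils or rejects.
fetch('/settings.json')
  .then(res => res.json())
  .catch(() => ({}))
  .finally(() => console.log('request finished'));

// Asynchronous iteration: for-await-of over an async generator.
async function* numbers() {
  for (let i = 1; i <= 3; i++) {
    yield await Promise.resolve(i);
  }
}

(async () => {
  for await (const n of numbers()) {
    console.log(n); // 1, 2, 3
  }
})();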
RegExp named capture groups

Named capture groups are another RegExp feature, similar to the "named groups" found in Java and Python. With this, you can write a RegExp that assigns names in the format (?<name>...) to different parts of the match. This allows you to use that name to grab whichever group you need in a straightforward way. The detailed proposal can be viewed on GitHub.

s (dotAll) flag for regular expressions

In regular expression patterns, the dot (.) previously could not match astral and line terminator characters like \n and \f, which made writing such regexes complicated. The newer version adds a new s flag, under which the dot can match any character, including astral and line terminator characters. The detailed proposal can be viewed on GitHub.

When will ECMAScript 2018 be available?

All of the features above are expected to be implemented and available in browsers this year - it's in the name, after all. But there are likely to be even more new features and capabilities in the 2019 release. Read this to get a clearer picture of what's likely to feature in the 2019 release.
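And to round off the regular-expression additions covered in this article, a few illustrative one-liners; the patterns and input strings are examples only.

// Named capture groups: (?<name>...) makes the matched parts self-describing.
const dateRe = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/;
const { year, month, day } = '2018-06-21'.match(dateRe).groups;

// Lookbehind assertion: match an amount only when preceded by a dollar sign,
// without including the sign in the match.
const amount = '$42.50'.match(/(?<=\$)\d+(\.\d+)?/)[0]; // "42.50"

// s (dotAll) flag: the dot also matches line terminators.
console.log(/^hello.world$/s.test('hello\nworld')); // true

// Unicode property escapes (with the u flag): match any Greek-script character.
console.log(/\p{Script=Greek}/u.test('π')); // true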


Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!

Bhagyashree R
24 Sep 2018
3 min read
Last week, the Nuxt.js community announced the release of Nuxt.js 2.0 with major improvements. This release comes with a scaffolding tool, create-nuxt-app, to quickly get you started with Nuxt.js development. To provide faster boot-up and re-compilation, this release is upgraded to Webpack 4 (Legato) and Babel 7.

Nuxt.js is an open source web application framework for creating Vue.js applications. You can choose between universal, statically generated, or single page applications.

What is new in Nuxt.js 2.0?

Introduced features and upgrades

create-nuxt-app: To get you quickly started with Nuxt.js development, you can use the newly introduced create-nuxt-app tool. This tool includes all the Nuxt templates such as the starter template, express templates, and so on. With create-nuxt-app you can choose between integrated server-side frameworks and UI frameworks, and add the axios module.

Introducing nuxt-start and nuxt-legacy: nuxt-start is introduced to start a Nuxt.js application in production mode. nuxt-legacy is added to support legacy builds of Nuxt.js for Node.js < 8.0.0.

Upgraded to Webpack 4 and Babel 7: To provide faster boot-up time and faster re-compilation, this release uses Webpack 4 (Legato) and Babel 7.

ESM supported everywhere: In this release, ESM is supported everywhere. You can now use export/import syntax in nuxt.config.js, serverMiddleware, and modules.

Replace postcss-cssnext with postcss-preset-env: Due to the deprecation of cssnext, you have to use postcss-preset-env instead of postcss-cssnext.

Use ~assets instead of ~/assets: Due to the css-loader upgrade, use ~assets instead of ~/assets for the alias in the <url> CSS data type, for example, background: url("~assets/banner.svg").

Improvements

The HTML script tag in core/renderer.js is fixed to pass W3C validation. The background-color property is now replaced with background in loadingIndicator, to allow the use of images and gradients for your background in SPA mode. Due to server/client artifact isolation, external build.publicPath needs to upload built content to the .nuxt/dist/client directory instead of .nuxt/dist. webpackbar and consola provide an improved CLI experience and better CI compatibility. Template literals in lodash templates are disabled. There is better error handling if the specified plugin isn't found.

Deprecated features

The vendor array isn't supported anymore. DLL support is removed because it was not stable enough. AggressiveSplittingPlugin is obsoleted; users can use optimization.splitChunks.maxSize instead. The render.gzip option is deprecated; users can use render.compressor instead.

To read more about the updates, check out Nuxt's official announcement on Medium and also see the release notes on its GitHub repository.

Next.js 7, a framework for server-rendered React applications, releases with support for React context API and Webassembly
low.js, a Node.js port for embedded systems
Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js
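A minimal, hypothetical sketch of two of the changes above - scaffolding with create-nuxt-app and ES module syntax in nuxt.config.js. The project name and config values are invented for illustration.

// Scaffold a new project with the new tool (run from a terminal):
//   npx create-nuxt-app my-nuxt-app

// nuxt.config.js - export/import (ESM) syntax is now supported here.
export default {
  head: {
    title: 'my-nuxt-app',
    meta: [{ charset: 'utf-8' }]
  }
};

// Note the alias change inside component styles after the css-loader upgrade:
//   background: url("~assets/banner.svg");   /* was ~/assets before 2.0 */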


The Tor Project on browser fingerprinting and how it is taking a stand against it

Bhagyashree R
06 Sep 2019
4 min read
In a blog post shared on Wednesday, Pierre Laperdrix, a postdoctoral researcher in the Secure Web Applications Group at CISPA, talked about browser fingerprinting, its risks, and the efforts taken by the Tor Project to prevent it. He also talked about his Fingerprint Central website, which has officially been a part of the Tor Project since 2017.

What is browser fingerprinting?

Browser fingerprinting is the systematic collection of information about a remote computing device for the purposes of identification. There are several techniques through which a third party can build a "rich fingerprint." These include the availability of JavaScript or other client-side scripting languages, the user-agent and accept headers, the HTML5 Canvas element, and more.

A browser fingerprint may include information like browser and operating system type and version, active plugins, timezone, language, screen resolution, and various other active settings. Some users may think that these are too generic to identify a particular person. However, a study by Panopticlick, a browser fingerprinting test website, found that only 1 in 286,777 other browsers will share a given browser's fingerprint. Laperdrix shared an example fingerprint in his post (source: The Tor Project).

As with any technology, browser fingerprinting can be used or misused. Fingerprints can enable a remote application to prevent potential fraud or online identity theft. On the other hand, they can also be used to track users across websites and collect information about their online behavior without their consent. Advertisers and marketers can use this data for targeted advertising.

Read also: All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night
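To make the threat concrete, here is a deliberately small, illustrative sketch of the kind of attributes a fingerprinting script can collect with standard browser APIs. Real trackers combine many more signals (fonts, audio, WebGL, and so on), and this snippet is not taken from Laperdrix's post.

// Collect a handful of identifying attributes using only standard browser APIs.
function collectFingerprint() {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  ctx.textBaseline = 'top';
  ctx.font = '14px Arial';
  ctx.fillText('fingerprint test', 2, 2); // rendering differences leak device/driver details

  return {
    userAgent: navigator.userAgent,
    language: navigator.language,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
    plugins: Array.from(navigator.plugins).map(p => p.name),
    canvas: canvas.toDataURL()
  };
}

console.log(collectFingerprint());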
Steps taken by the Tor Project to prevent browser fingerprinting

Laperdrix said that Tor was the very first browser to understand and address the privacy threats browser fingerprinting poses. The Tor browser, which goes by the tagline "anonymity online", is designed to reduce online tracking and identification of users. The browser takes a very simple approach to prevent the identification of users. "In the end, the approach chosen by Tor developers is simple: all Tor users should have the exact same fingerprint. No matter what device or operating system you are using, your browser fingerprint should be the same as any device running Tor Browser," Laperdrix wrote.

There are many other changes that have been made to the Tor browser over the years to prevent the unique identification of users. Tor warns users when they maximize their browser window, as window size is also one attribute that can be used to identify them. It has introduced default fallback fonts to prevent font and canvas fingerprinting. It has all the JS clock sources and event timestamps set to a specific resolution to prevent JS from measuring the time intervals of things like typing to produce a fingerprint.

Talking about his contribution towards preventing browser fingerprinting, Laperdrix wrote, "As part of the effort to reduce fingerprinting, I also developed a fingerprinting website called FP Central to help Tor developers find fingerprint regressions between different Tor builds." As part of Google Summer of Code 2016, Laperdrix proposed to develop a website called Fingerprint Central, which is now officially included in the Tor Project. Similar to AmIUnique.org or Panopticlick, FP Central is designed to study the diversity of browser fingerprints.

It runs a fingerprinting test suite and collects data from Tor browsers to help developers design and test new fingerprinting protections. They can also use it to ensure that fingerprinting-related bugs are correctly fixed with specific regression tests. Explaining the long-term goal of the website, he said, "The expected long-term impact of this project is to reduce the differences between Tor users and reinforce their privacy and anonymity online."

There are a whole lot of modifications made under the hood to prevent browser fingerprinting that you can check out using the "tbb-fingerprinting" tag in the bug tracker. These modifications will also make their way into future releases of Firefox under the Tor Uplift program.

Many organizations have taken a stand against browser fingerprinting, including the browser makers Mozilla and Brave. Earlier this week, Firefox 69 shipped with browser fingerprinting blocked by default. Brave also comes with a Fingerprinting Protection Mode enabled by default. In 2018, Apple updated Safari to only share a simplified system profile, making it difficult to uniquely identify or track users.

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Check out Laperdrix's post on the Tor blog to know more in detail about browser fingerprinting.

Other news in Web

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
Google Chrome 76 now supports native lazy-loading
#Reactgate forces React leaders to confront the community's toxic culture head on


Mozilla is building a bridge between Rust and JavaScript

Richard Gall
10 Apr 2018
2 min read
Mozilla is building a bridge between Rust and JavaScript. The bridge - wasm-bindgen - allows Rust to interact with WebAssembly modules and JavaScript. One of Mozilla's aims is to turn Rust into a language for web development; building a link to WebAssembly is perhaps the most straightforward way for it to run on the web. But this doesn't mean Mozilla wants Rust to replace JavaScript - far from it. The plan is instead for Rust to be a language that runs alongside JavaScript on the backend. Read more about Mozilla's plan here.

What is wasm-bindgen?

It's worth reading this from the Mozilla blog to see the logic behind wasm-bindgen and how it's going to work:

"Today the WebAssembly specification only defines four types: two integer types and two floating-point types. Most of the time, however, JS and Rust developers are working with much richer types! For example, JS developers often interact with document to add or modify HTML nodes, while Rust developers work with types like Result for error handling, and almost all programmers work with strings. Being limited only to the types that WebAssembly provides today would be far too restrictive, and this is where wasm-bindgen comes into the picture. The goal of wasm-bindgen is to provide a bridge between the types of JS and Rust. It allows JS to call a Rust API with a string, or a Rust function to catch a JS exception. wasm-bindgen erases the impedance mismatch between WebAssembly and JavaScript, ensuring that JavaScript can invoke WebAssembly functions efficiently and without boilerplate, and that WebAssembly can do the same with JavaScript functions."

This is exciting, as it is a valuable step in expanding the capabilities of Rust. The thinking behind wasm-bindgen will also help to further drive the adoption of WebAssembly. While the focus is, of course, on Rust at the moment, there's a possibility that wasm-bindgen could eventually be used with other languages, such as C and C++. This might take some time, however.

Download wasm-bindgen from GitHub.
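For a sense of what this looks like from the JavaScript side, here is a hypothetical sketch: assume a Rust crate compiled through wasm-bindgen exposes a greet function that takes a string. The module path and function name are assumptions for illustration, not taken from Mozilla's post.

// JavaScript side of a hypothetical wasm-bindgen project. The generated glue
// module wraps the compiled .wasm file and re-exports the Rust functions.
import { greet } from './pkg/my_rust_lib.js';

// The Rust function is called with an ordinary JS string; wasm-bindgen handles
// moving the string across the JavaScript/WebAssembly boundary.
console.log(greet('WebAssembly'));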

React introduces Hooks, a JavaScript function to allow using React without classes

Bhagyashree R
26 Oct 2018
3 min read
Yesterday, the React community introduced Hooks, a new feature proposal which has landed in React 16.7.0-alpha. With the help of Hooks, you will be able to "hook into" or use React state and other React features from function components. Notably, Hooks don't work inside classes - they let you use React without classes.

Why are Hooks being introduced?

Easily reuse React components

Currently, there is no way to attach reusable behavior to a component. There are some patterns like render props and higher-order components that try to solve this problem to some extent, but you need to restructure your components to use them. Hooks make it easier for you to extract stateful logic from a component so it can be tested independently and reused - all without having to change your component hierarchy. You can also easily share them among many components or with the community.

Splitting related components

In React, each lifecycle method often contains a mix of unrelated logic. Mutually related code that changes together ends up split apart, while completely unrelated code ends up combined in a single method. This can introduce bugs and inconsistencies. Rather than forcing a component split based on lifecycle methods, Hooks allow you to split a component into smaller functions based on which pieces are related.

Use React without classes

One of the hurdles that people face while learning React is classes. You need to understand how this works in JavaScript, which is very different from how it works in most languages, and you also have to remember to bind the event handlers. Hooks solve these problems by letting you use more of React's features without classes. React components have always been closer to functions; Hooks embrace functions, but without sacrificing the practical spirit of React. With Hooks, you get access to imperative escape hatches that don't require you to learn complex functional or reactive programming techniques.

Hooks are completely opt-in and 100% backward compatible. After community feedback, which seems to be positive going by the ongoing discussion on the RFC, this feature, currently in an alpha release, will be introduced in React 16.7.

To know more in detail about React Hooks, check out the official announcement.

React 16.6.0 releases with a new way of code splitting, and more!
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
RxDB 8.0.0, a reactive, offline-first, multiplatform database for JavaScript released!
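A minimal sketch along the lines of the example in the proposal - a counter written as a function component using the useState Hook, assuming a React 16.7.0-alpha setup with JSX:

import React, { useState } from 'react';

// A function component that "hooks into" React state - no class required.
function Counter() {
  const [count, setCount] = useState(0); // returns the current value and a setter

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;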


After backlash for rejecting a uBlock Origin update from the Chrome Web Store, Google accepts ad-blocking extension

Bhagyashree R
15 Oct 2019
6 min read
Last week, Raymond Hill, the developer behind uBlock Origin, shared that the extension's dev build 1.22.5rc1 was rejected by Google's Chrome Web Store (CWS). uBlock Origin is a free and open-source browser extension widely used for content filtering and ad blocking.

Google stated that the extension did not comply with its extension standards as it bundles different purposes into a single extension. An email to Hill from Google reads, "Do not create an extension that requires users to accept bundles of unrelated functionality, such as an email notifier and a news headline aggregator." Hill, on a GitHub issue, mentioned that this is basically "stonewalling" and that in the future users may have to switch to another browser to use uBlock Origin. He does plan to upload the stable version. He commented, "I will upload stable to the Chrome Web Store, but given 1.22.5rc2 is rejected, logic dictates that 1.23.0 will be rejected. Actually, logic dictates that 1.22.5rc0 should also be rejected and yet it's still available in the Chrome Web Store."

Users' reaction to Google rejecting the uBlock Origin dev build

This news sparked a discussion on Hacker News and Reddit. Users speculated that this outcome is probably the result of the "crippling" update Google has introduced in Chrome (currently in the beta and dev versions): deprecating the blocking ability of the webRequest API.

The webRequest API permits extensions to intercept requests to modify, redirect, or block them. The basic flow of handling a request using this API is: Chrome receives the request, asks the extension, and then gets the result. In Manifest V3, the use of this API will be limited in its blocking form, while the non-blocking form of the API, which permits extensions to observe network requests, will still be allowed. In place of the blocking webRequest API, Google has introduced the declarativeNetRequest API. This API allows adding up to 30,000 rules, 5,000 dynamic rules, and 100 pages. Due to its limiting nature, many ad blocker developers and maintainers have expressed that this API will impact the capabilities of modern content-blocking extensions.

Google's reasoning for introducing this change is that the new API is much more performant and provides better privacy guarantees. However, many developers think otherwise. Hill had previously shared his thoughts on deprecating the blocking ability of the webRequest API: "Web pages load slow because of bloat, not because of the blocking ability of the webRequest API -- at least for well-crafted extensions. Furthermore, if performance concerns due to the blocking nature of the webRequest API was their real motive, they would just adopt Firefox's approach and give the ability to return a Promise on just the three methods which can be used in a blocking manner."
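To make the difference between the two models concrete, here is a rough, illustrative sketch; the URL filter and rule values are invented, and real extensions declare such rules in their manifest or add them at runtime through the declarativeNetRequest API.

// Manifest V2 style: the extension's own code decides, per request, whether to block.
chrome.webRequest.onBeforeRequest.addListener(
  details => ({ cancel: details.url.includes('ads.example.com') }), // hypothetical filter
  { urls: ['<all_urls>'] },
  ['blocking']
);

// Manifest V3 style: a static, declarative rule handed to the browser up front;
// the extension never sees the request, and the total number of rules is capped.
const blockAdsRule = {
  id: 1,
  priority: 1,
  action: { type: 'block' },
  condition: { urlFilter: 'ads.example.com', resourceTypes: ['script', 'image'] }
};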
Many users also mentioned that Chrome is using its dominance in the browser market to dictate what types of extensions are developed and used. A user commented, "As Chrome is a dominant platform, our work is prevented from reaching users if it does not align with the business goals of Google, and extensions that users want on their devices are effectively censored out of existence." Others expressed that it is better to avoid all the drama by simply switching to another browser, mainly Firefox: "Or you could cease contributing to the Blink monopoly on the web and join us on Firefox. Microsoft is no longer challenging Google in this space," a user added.

Some others were in support of Google, saying that Hill could have moved some of the functionality to a separate extension: "It's an older rule. It does technically apply here, but it's not a great look that they're only enforcing it now. If Gorhill needed to, some of that extra functionality could be moved out into a separate extension. uBlock has done this before with uBlock Origin Extra. Most of the extra features (eg. remote font blocking) aren't a huge deal, in my opinion."

How Google reacted to the public outcry

Simeon Vincent, a developer advocate for Chrome extensions, commented on a Reddit discussion that the updated extension was approved and published on the Chrome Web Store: "This morning I heard from the review team; they've approved the current draft so next publish should go through. Unfortunately it's the weekend, so most folks are out, but I'm planning to follow up with u/gorhill4 with more details once I have them. EDIT: uBlock Origin development build was just successfully published. The latest version on the web store is 1.22.5.102."

He further said that this whole confusion arose because of a "clunkier" developer communication process. When users asked him about the Manifest V3 change, he shared, "We've made progress on better supporting ad blockers and content blockers in general in Manifest V3. We've added rule modification at runtime, bumped the rule limits, added redirect support, header modification, etc. And more improvements are on the way." He added, "But Gorhill's core objection is to removing the blocking version of webRequest. We're trying to move the extension platform in a direction that's more respectful of end-user privacy, more secure, and less likely to accidentally expose data – things webRequest simply wasn't designed to do."

Chrome ignores the autocomplete=off property

In other Chrome-related news, it was reported that Chrome continues to autofill forms even if you disable it using the autocomplete=off attribute. A user commented, "I've had to write enhancements for Web apps several times this year with fields which are intended to be filled by the user with information *about other users*. Not respecting autocomplete="off" is a major oversight which has caused a lot of headache for those enhancements."

Chrome decides which field should be filled with what data based on a combination of form and field signatures. If these do not match, the browser will resort to only checking the field signatures. A developer from the Google Chrome team shared, "This causes some problems, e.g. in input type="text" name="name", the "name" can refer to different concepts (a person's name or the name of a spare part)." To solve this problem, the team is working on an experimental feature that gives users the choice to "(permanently) hide the autofill suggestions." Check out the reported issue to know more in detail.

Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
GitHub updates to Rails 6.0 with an incremental approach
React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!


Google Chrome 76 now supports native lazy-loading

Bhagyashree R
27 Aug 2019
4 min read
Earlier this month, Google Chrome 76 got native support for lazy loading. Web developers can now use the new 'loading' attribute to lazy-load resources without having to rely on a third-party library or write custom lazy-loading code.

Why native lazy loading was introduced

Lazy loading aims to provide better web performance in terms of both speed and data consumption. Generally, images are the most requested resources on any website, and some web pages end up using a lot of data to load images that sit outside the viewport. Though this might not have much effect on a WiFi user, it can consume a lot of cellular data. Not only images, but out-of-viewport embedded iframes can also consume a lot of data and contribute to slow page speed.

Lazy loading addresses this problem by deferring non-critical, below-the-fold image and iframe loads until the user scrolls closer to them. This results in faster web page loading, minimized bandwidth for users, and reduced memory usage.

Previously, there were a few ways to defer the loading of images and iframes that were outside the viewport. You could use the Intersection Observer API or the 'data-src' attribute on the 'img' tag. Many developers also built third-party libraries to provide abstractions that are even easier to use. Bringing native support, however, eliminates the need for an external library. It also ensures that the deferred loading of images and iframes still works even if JavaScript is disabled on the client.

How you can use lazy loading

Without this feature, Chrome already loads images at different priorities depending on their location with respect to the device viewport. The new 'loading' attribute, however, allows developers to completely defer the loading of images and iframes until the user scrolls near them. The distance-from-viewport threshold is not fixed and depends on the type of resource being fetched, whether Lite mode is enabled, and the effective connection type. There are default values assigned per effective connection type in the Chromium source code that might change in a future release. Also, since the images are lazy-loaded, there is a chance of content reflow; to prevent this, developers are advised to set a width and height for the images.

You can assign any one of the following three values to the 'loading' attribute:

'auto': This represents the default behavior of the browser and is equivalent to not including the attribute.
'lazy': This will defer the loading of the image or iframe until it reaches a calculated distance from the viewport.
'eager': This will load the resource immediately.
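Using the attribute is a one-line change per resource; the URLs below are placeholders. Setting an explicit width and height, as advised above, avoids the content reflow mentioned earlier.

<!-- Deferred until the user scrolls near it -->
<img src="/images/banner.jpg" loading="lazy" width="640" height="360" alt="Banner">

<!-- Loaded immediately, regardless of position -->
<img src="/images/logo.png" loading="eager" width="120" height="40" alt="Logo">

<!-- Iframes accept the same attribute -->
<iframe src="https://example.com/embed" loading="lazy" width="640" height="360"></iframe>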
Support for native lazy loading in Chrome 76 got mixed reactions from users. A user commented on Hacker News, "I'm happy to see this. So many websites with lazy loading never implemented a fallback for noscript. And most of the popular libraries didn't account for this accessibility."

Another user expressed that it hinders the user experience: "I may be the odd one out here, but I hate lazy loading. I get why it's a big thing on cellular connections, but I do most of my browsing on WIFI. With lazy loading, I'll frequently be reading an article, reach an image that hasn't loaded in yet, and have to wait for it, even though I've been reading for several minutes. Sometimes I also have to refind my place as the whole darn page reflows. I wish there was a middle ground... detect I'm on WIFI and go ahead and load in the lazy stuff after the above the fold stuff."

Right now, Chrome is the only browser to support native lazy loading. However, other browsers may follow suit, considering Firefox has an open bug for implementing lazy loading and Edge is based on Chromium.

Why should your e-commerce site opt for Headless Magento 2?
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI


Symfony leaves PHP-FIG, the framework interoperability group

Amrata Joshi
21 Nov 2018
2 min read
Yesterday, Symfony, a community of 600,000 developers from more than 120 countries, announced that it will no longer be a member of PHP-FIG, the framework interoperability group. Prior to Symfony, other major members to leave this group include Laravel, Propel, Guzzle, and Doctrine. The main goal of the PHP-FIG group is to work together to maintain interoperability, discuss commonalities between projects, and make them better.

Why Symfony is leaving PHP-FIG

PHP-FIG has been working on various PSRs (PHP Standard Recommendations). Kévin Dunglas, a core team member at Symfony, said, "It looks like it's not the goal anymore, 'cause most (but not all) new PSRs are things no major frameworks ask for, and that they can't implement without breaking their whole ecosystem."

https://twitter.com/fabpot/status/1064946913596895232

The fact that the major contributors have left the group could possibly be a major reason for Symfony to quit. But it seems many are disappointed by this move, as they aren't satisfied with the reason given.

https://twitter.com/mickael_andrieu/status/1065001101160792064

The matter of concern for Symfony was that the major projects were not being implemented as a combined effort.

https://twitter.com/dunglas/status/1065004250005204998
https://twitter.com/dunglas/status/1065002600402247680

Something similar happened while working towards PSR-7, where commonalities between the projects were not given importance; instead, it was treated as a new, separate framework.

https://twitter.com/dunglas/status/1065007290217058304
https://twitter.com/titouangalopin/status/1064968608646864897

People are still arguing over why Symfony quit.

https://twitter.com/gmponos/status/1064985428300914688

Will the PSRs die?

With this latest move by Symfony, various questions have been raised about the project's next steps. Will it still support PSRs, or is this the end of PSRs for Symfony? Kévin Dunglas answered this question in one of his tweets: "Regarding PSRs, I think we'll implement them if relevant (such as PSR-11) but not the ones not in the spirit of a broad interop (as PSR-7/14)."

To know more about this news, check out Fabien Potencier's Twitter thread.

Perform CRUD operations on MongoDB with PHP
Introduction to Functional Programming in PHP
Building a Web Application with PHP and MariaDB – Introduction to caching

Apple shares tentative goals for WebKit 2020

Sugandha Lahoti
11 Nov 2019
3 min read
Apple has released a list of tentative goals for WebKit in 2020, catering to WebKit users as well as web, native, and WebKit developers. These features are tentative, and there is no guarantee that these updates will ship at all. Before releasing new features, Apple looks at a number of factors that are arranged according to a plan or system: it looks at developer interest and the harmful aspects associated with a feature, and sometimes also takes feedback and suggestions from high-value websites.

WebKit 2020 enhancements for WebKit users

Primarily, WebKit is focused on improving performance as well as privacy and security. Some performance ideas suggested include media query change handling, no sync IPC for cookies, fast for-of iteration, Turbo DFG, async gestures, fast scrolling on macOS, global GC, and Service Worker declarative routing. For privacy, Apple is focusing on addressing ITP bypasses, a logged-in API, in-app browser privacy, and PCM with fraud prevention. It is also working on improving authentication, network security, JavaScript hardening, WebCore hardening, and sandbox hardening.

Improvements in WebKit 2020 for web developers

For the web platform, the focus is on three qualities: catch-up, innovation, and quality. Apple is planning to bring improvements in graphics and animations (CSS overscroll-behavior, WebGL 2, Web Animations), media (Media Session Standard, MediaStream Recording, the Picture-in-Picture API), and DOM, JavaScript, and text. It is also looking to improve CSS Shadow Parts, stylable pieces, JS built-in modules, and the Undo Web API, and to work on WPT (Web Platform Tests).

Changes suggested for native developers

For native developers on the obsolete legacy WebKit, the following changes are suggested:

WKWebView API needed for migration
Fix cookie flakiness due to multiple process pools
WKWebView APIs for media

Enhancements for WebKit developers

The focus is on improving architecture health and services and tools. Changes suggested are:

Define "intent to implement" style process
Faster builds (finish unified builds)
Next-gen layout for line layout
Regression test debt repayment
IOSurface in Simulator
EWS (Early Warning System) improvements
Buildbot 2.0
WebKit on GitHub as a project (year 1 of a multi-year project)

On Hacker News, this topic was widely discussed, with people pointing out what they want to see in WebKit. One user wrote, "Two WebKit goals I'd like to see for 2020: (1) Allow non-WebKit browsers on iOS (start outperforming your competition instead of merely banning your competition), and (2) Make iOS the best platform for powerful web apps instead of the worst, the leader instead of the spoiler." Another pointed out, "It would be great if SVG rendering, used for diagrams, was of equal quality to Firefox." One said, "WebKit and the Safari browsers by extension should have full and proper support for Service Workers and PWAs on par with other browsers."

For a full list of updates, please see the WebKit Wiki page.

Apple introduces Swift Numerics to support numerical computing in Swift
Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability
Apple's MacOS Catalina in major turmoil as it kills iTunes and drops support for 32 bit applications


How to build a chatbot with Microsoft Bot framework

Kunal Chaudhari
27 Apr 2018
8 min read
The Microsoft Bot Framework is an incredible tool from Microsoft. It makes building chatbots easier and more accessible than ever. That means you can build awesome conversational chatbots for a range of platforms, including Facebook and Slack.

In this tutorial, you'll learn how to build an FAQ chatbot using the Microsoft Bot Framework and ASP.NET Core. This tutorial has been taken from .NET Core 2.0 By Example. Let's get started.

Your chatbot will be able to respond to simple queries such as:

How are you?
Hello!
Bye!

This should provide a good foundation for you to go further and build more complex chatbots with the Microsoft Bot Framework; the more you train the bot and the more questions you put in its knowledge base, the better it will be. If you're a UK-based public sector organisation, then ICS AI offer conversational AI solutions built to your needs. Their Microsoft-based infrastructure runs chatbots augmented with AI to better serve general public enquiries.

Build a basic FAQ chatbot with Microsoft Bot Framework

First of all, we need to create a page that can be accessed anonymously, as this is a frequently asked questions (FAQ) page, and hence the user should not be required to be logged in to the system to access it. To do so, let's create a new controller called FaqController in our LetsChat.csproj. It will be a very simple class with just one action called Index, which will display the FAQ page. The code is as follows:

[AllowAnonymous]
public class FaqController : Controller
{
    // GET: Faq
    public ActionResult Index()
    {
        return this.View();
    }
}

Notice that we have used the [AllowAnonymous] attribute, so that this controller can be accessed even if the user is not logged in. The corresponding .cshtml is also very simple. In the solution explorer, right-click on the Views folder under the LetsChat project, create a folder named Faq, and then add an Index.cshtml file in that folder. The markup of the Index.cshtml would look like this:

@{
    ViewData["Title"] = "Let's Chat";
    ViewData["UserName"] = "Guest";
    if(User.Identity.IsAuthenticated)
    {
        ViewData["UserName"] = User.Identity.Name;
    }
}
<h1> Hello @ViewData["UserName"]! Welcome to FAQ page of Let's Chat </h1>
<br />

Nothing much here apart from the welcome message. The message displays the username if the user is authenticated; otherwise it displays Guest. Now, we need to integrate the chatbot on this page. To do so, let's browse to http://qnamaker.ai. This is Microsoft's QnA (as in questions and answers) Maker site, a free, easy-to-use, REST API and web-based service that trains artificial intelligence (AI) to respond to user questions in a more natural, conversational way. Compatible across development platforms, hosting services, and channels, QnA Maker is the only question and answer service with a graphical user interface - meaning you don't need to be a developer to train, manage, and use it for a wide range of solutions. And that is what makes it incredibly easy to use.

You will need to log in to this site with your Microsoft account (@microsoft/@live/@outlook). If you don't have one, you should create one and log in. On the very first login, the site will display a dialog seeking permission to access your email address and profile information. Click Yes and grant permission. You will then be presented with the service terms. Accept those as well, then navigate to the Create New Service tab.
A form will appear as shown here. The form is easy to fill in and provides the option to extract the question/answer pairs from a site or from .tsv, .docx, .pdf, and .xlsx files. We don't have questions handy, so we will type them; do not bother about these fields. Just enter the service name and click the Create button. The service should be created successfully and the knowledge base screen should be displayed.

We will enter probable questions and answers in this knowledge base. If the user types a question that resembles a question in the knowledge base, the service will respond with the answer from the knowledge base. Hence, the more questions and answers we type, the better it will perform. So, enter all the questions and answers that you wish to enter, test it in the local chatbot setup, and, once you are happy with it, click on Publish. This will publish the knowledge base and share the sample URL to make the HTTP request. Note it down in a notepad; it contains the knowledge base identifier GUID, hostname, and subscription key.

With this, our questions and answers are ready and deployed. We need to display a chat interface, pass the user-entered text to this service, and display the response from this service to the user in the chat user interface. To do so, we will make use of the Microsoft Bot Builder SDK for .NET and follow these steps:

Download the Bot Application project template from http://aka.ms/bf-bc-vstemplate.
Download the Bot Controller item template from http://aka.ms/bf-bc-vscontrollertemplate.
Download the Bot Dialog item template from http://aka.ms/bf-bc-vsdialogtemplate.
Next, identify the project template and item template directory for Visual Studio 2017. The project template directory is located at %USERPROFILE%\Documents\Visual Studio 2017\Templates\ProjectTemplates\Visual C# and the item template directory is located at %USERPROFILE%\Documents\Visual Studio 2017\Templates\ItemTemplates\Visual C#.
Copy the Bot Application project template to the project template directory.
Copy the Bot Controller ZIP and Bot Dialog ZIP to the item template directory.
In the solution explorer of the LetsChat project, right-click on the solution and add a new project. Under Visual C#, we should now start seeing a Bot Application template.
Name the project FaqBot and click OK. A new project will be created in the solution, which looks similar to the MVC project template.
Build the project so that all the dependencies are resolved and packages are restored. If you run the project, it is already a working bot, which can be tested with the Microsoft Bot Framework emulator.
Download the BotFramework-Emulator setup executable from https://github.com/Microsoft/BotFramework-Emulator/releases/.
Let's run the bot project by hitting F5. It will display a page pointing to the default URL of http://localhost:3979. Now, open the Bot Framework emulator, navigate to the preceding URL with api/messages appended to it - that is, browse to http://localhost:3979/api/messages - and click Connect. On successful connection to the bot, a chat-like interface will be displayed in which you can type a message.

We have a working bot in place which just returns the text along with its length. We need to modify this bot to pass the user input to our QnA Maker service and display the response returned from our service.
To do so, we will need to check the code of MessagesController in the Controllers folder. We notice that it has just one method called Post, which checks the activity type, does specific processing for the activity type, creates a response, and returns it. The calculation happens in the Dialogs.RootDialog class, which is where we need to make the modification to wire up our QnA service. The modified code is shown here:

private static string knowledgeBaseId = ConfigurationManager.AppSettings["KnowledgeBaseId"]; //// Knowledge base id of QnA Service.
private static string qnamakerSubscriptionKey = ConfigurationManager.AppSettings["SubscriptionKey"]; //// Subscription key.
private static string hostUrl = ConfigurationManager.AppSettings["HostUrl"];

private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
{
    var activity = await result as Activity;

    // return our reply to the user
    await context.PostAsync(this.GetAnswerFromService(activity.Text));
    context.Wait(MessageReceivedAsync);
}

private string GetAnswerFromService(string inputText)
{
    //// Build the QnA Service URI
    Uri qnamakerUriBase = new Uri(hostUrl);
    var builder = new UriBuilder($"{qnamakerUriBase}/knowledgebases/{knowledgeBaseId}/generateAnswer");
    var postBody = $"{{\"question\": \"{inputText}\"}}";

    // Add the subscription key header
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("Ocp-Apim-Subscription-Key", qnamakerSubscriptionKey);
        client.Headers.Add("Content-Type", "application/json");
        try
        {
            var response = client.UploadString(builder.Uri, postBody);
            var json = JsonConvert.DeserializeObject<QnAResult>(response);
            return json?.answers?.FirstOrDefault().answer;
        }
        catch (Exception ex)
        {
            return ex.Message;
        }
    }
}

The code is pretty straightforward. First, we add the QnA Maker service subscription key, host URL, and knowledge base ID in the appSettings section of Web.config. Next, we read these app settings into static variables so that they are always available. Then we modify the MessageReceivedAsync method of the dialog to pass the user input to the QnA service and return the response of the service back to the user. The QnAResult class can be seen in the source code.

This can be tested in the emulator by typing in any of the questions that we have stored in our knowledge base, and we will get the appropriate response.

Our simple FAQ bot using the Microsoft Bot Framework and ASP.NET Core 2.0 is now ready!

Read more about building chatbots: How to build a basic server side chatbot using Go


Mozilla and Google Chrome refuse to support Gab’s Dissenter extension for violating acceptable use policy

Bhagyashree R
12 Apr 2019
5 min read
Earlier this year, Gab, the "free speech" social network and a popular forum for far-right viewpoint holders and other fringe groups, launched a browser extension named Dissenter that creates an alternative comment section for any website. The plug-in has now been removed from the extension stores of both Mozilla and Google, as it violates their acceptable use policies. This decision came after the Columbia Journalism Review reported the extension to the tech giants.

https://twitter.com/nausjcaa/status/1116409587446484994

The Dissenter plug-in, which goes by the tagline "the comment section of the internet", allows users to discuss any topic in real time without fearing that their posted comment will be removed by a moderator. The plug-in failed to pass Mozilla's review process and is now disabled for Firefox users, but users who have already installed it can continue to use it. The Gab team took to Twitter complaining about Mozilla's Acceptable Use Policy.

https://twitter.com/getongab/status/1116036111296544768

When asked for more clarity on which policies Dissenter did not comply with, Mozilla said that it had received abuse reports for the extension. It further added that the platform is being used for promoting violence, hate speech, and discrimination, but it failed to show any examples to lend credibility to these claims.

https://twitter.com/getongab/status/1116088926559666181

The extension developers responded by saying that they do moderate illegal conduct or posts happening on their platform as and when such posts are brought to their attention. "We do not display content containing words from a list of the most offensive racial epithets in the English language," added the Gab developers.

Soon after this, Google Chrome also removed the extension from the Chrome Web Store, stating the same reason: that the extension does not comply with its policies. After getting deplatformed, the Dissenter team has come to the conclusion that the best way forward is to create its own browser. The team is thinking of forking Chromium or the privacy-focused web browser Brave. "That's it. We are going to fork Chromium and create a browser with Dissenter, ad blocking, and other privacy tools built in along with the guarantee of free speech that Silicon Valley does not provide."

https://twitter.com/getongab/status/1116308126461046784

Gab does not moderate views posted by its users until they are flagged for violations, and says it "treats its users as adults". So, unless people complain, the platform will not take action against threats and hate speech posted in the comments. Though it is known for its tolerance of fringe views and has drawn immense heat from the public, things took a turn for the worse after the recent Christchurch shooting. A far-right extremist who shot dead 20+ Muslims and left 30 others injured in two mosques in New Zealand had shared his extremist manifesto on social media sites like Gab and 8chan. He had also live-streamed the shooting on Facebook, YouTube, and others.

This is not the first time Gab has been involved in a controversy. Back in October last year, PayPal banned Gab following the anti-Semitic mass shooting in Pittsburgh. It was reported that the shooter was an active poster on the Gab website and had hinted at his intentions shortly before the attack.
In the same month, hosting provider Joyent also suspended its services for Gab, and the platform has been warned by Stripe for violations of its policies. Torba, the co-founder of Gab, said, "Payments companies like Paypal, Stripe, Square, Cash App, Coinbase, and Bitpay have all booted us off. Microsoft Azure, Joyent, GoDaddy, Apple, Google's Android store, and other infrastructure providers, too, have denied us service, all because we refuse to censor user-generated content that is within the boundaries of the law."

Looking at this move by Mozilla, many users felt that it contradicts the organization's goal of making the web free and open for all.

https://twitter.com/VerGreeneyes/status/1116216415734960134
https://twitter.com/ChicalinaT/status/1116101257494761473

A Hacker News user added, "While Facebook, Reddit, Twitter and now Mozilla may think they're doing a good thing by blocking what they consider hateful speech, it's just helping these people double down on thinking they're in the right. We should not be afraid of ideas. Speech != violence. Violence is violence. With platforms banning more and more offensive content and increasing the label of what is bannable, we're seeing a huge split in our world. People who could once agree to disagree now don't even want to share the same space with one another. It's all call out culture and it's terrible."

Many people think that this step is nothing but a move towards mass censorship. "I see it as an active endorsement of filter funneling comments sections online, given that despite the operators of Dissenter having tried to make efforts to comply with the terms of service Mozilla have imposed for being listed in their gallery, were given an unclear rationale as to how having "broken" these terms, and no clue as to what they were supposed to do to have avoided doing so," adds a Reddit user.

Mozilla has not revoked the add-on's signature, so Dissenter can be distributed while guaranteeing that the add-on is safe and can be updated automatically. Manual installation of the extension from Dissenter.com/download is also possible.

Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta
Mozilla is exploring ways to reduce notification permission prompt spam in Firefox

Inkscape 1.0 beta is available for testing

Fatema Patrawala
19 Sep 2019
4 min read
Last week, the team behind the Inkscape project released the first beta version of the upcoming and much-awaited Inkscape 1.0. The team writes on the announcement page, “after releasing two less visible alpha versions this year, in mid-January and mid-June (and one short-lived beta version), Inkscape is now ready for extensive testing and subsequent bug-fixing.”

Most notable changes in Inkscape 1.0

New theme selection: In 'Edit > Preferences > User Interface > Theme', users can set a custom GTK3 theme for Inkscape. If the theme comes with a dark variant, activating the 'Use dark theme' checkbox will result in the dark variant being used. The new theme is applied immediately.
Origin in top left corner: Another significant change is that the origin of the document is now in the top left corner of the page. The coordinates a user sees in the interface match the ones saved in the SVG data, which makes working in Inkscape more comfortable for people used to this standard behavior.
Canvas rotation and mirroring: With Ctrl+Shift+Scroll wheel, the drawing area can be rotated and viewed from different angles. The canvas can also be flipped, to check that the drawing does not lean to one side and looks good either way.
Canvas alignment: When the option "Enable on-canvas alignment" is active in the "Align and Distribute" dialog, a new set of handles appears on the canvas. These handles can be used to align the selected objects relative to the area of the current selection.
HiDPI screens: Inkscape now supports HiDPI screens.
Controlling PowerStroke: The width of a PowerStroke can now be controlled with a pressure-sensitive graphics tablet.
Fillet/chamfer LPE and (non-destructive) Boolean Operation LPE: The new fillet/chamfer LPE adds fillets and chamfers to paths, while the Boolean Operations LPE finally makes non-destructive boolean operations available in Inkscape.
New PNG export options: The export dialog has received several new options, available when you expand the 'Advanced' section.
Centerline tracing: A new, unified dialog for vectorizing raster graphics is now available from Path > Trace Bitmap.
New Live Path Effect selection dialog: Live Path Effects received a major overhaul, with lots of improvements and new features.
Faster path operations and deselection of large numbers of paths.
Variable fonts support: If Inkscape is compiled with Pango library version 1.41.1 or newer, it will come with support for variable fonts.
Complete extensions overhaul: Extensions can now have clickable links, images, a better layout with separators and indentation, multiline text fields, file chooser fields, and more.
Command line syntax changes: The Inkscape command line is now more powerful and flexible for the user and easier to enhance for the developer (a short usage sketch appears further below).
Native support for macOS with a signed and notarized .dmg file: Inkscape is now a first-rate native macOS application and no longer requires XQuartz to operate.

Other important changes for users

Custom icon sets: Icon sets no longer consist of a single file containing all icons. Instead, each icon is allocated its own file, and the directory structure must follow the standard structure for Gnome icons. As a side effect of a bug fix to the icon preview dialog, custom UI icon SVG files need to be updated to have their background color alpha channel set to 0 so that they display correctly.

Third-party extensions: Third-party extensions need to be updated to work with this version of Inkscape.
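To illustrate the reworked command line, here is a minimal sketch that drives Inkscape 1.0 from Python to export an SVG to PNG. The `--export-type` and `--export-filename` flags follow the 1.0 release notes; the file names and the assumption that the `inkscape` binary is on the PATH are placeholders for illustration only.

```python
import subprocess
from pathlib import Path


def export_png(svg_path: str, png_path: str, dpi: int = 96) -> None:
    """Export an SVG file to PNG with the Inkscape 1.0 command line.

    Assumes 'inkscape' (1.0 beta or later) is available on the PATH;
    --export-type/--export-filename are the new-style options described
    in the release notes.
    """
    if not Path(svg_path).exists():
        raise FileNotFoundError(svg_path)
    subprocess.run(
        [
            "inkscape",
            svg_path,
            "--export-type=png",
            f"--export-filename={png_path}",
            f"--export-dpi={dpi}",
        ],
        check=True,  # raise CalledProcessError if Inkscape reports a failure
    )


if __name__ == "__main__":
    # Placeholder file names, for illustration only.
    export_png("drawing.svg", "drawing.png", dpi=300)
```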
Import/Export via UniConvertor dropped

Extensions that previously used the UniConvertor library for saving/opening various file formats have been removed.

Import formats that have been removed:
Adobe Illustrator 8.0 and below (UC) (*.ai)
Corel DRAW Compressed Exchange files (UC) (*.ccx)
Corel DRAW 7-X4 files (UC) (*.cdr)
Corel DRAW 7-13 template files (UC) (*.cdt)
Computer Graphics Metafile files (UC) (*.cgm)
Corel DRAW Presentation Exchange files (UC) (*.cmx)
HP Graphics Language Plot file [AutoCAD] (UC) (*.plt)
sK1 vector graphics files (UC) (*.sk1)

Export formats that have been removed:
HP Graphics Language Plot file [AutoCAD] (UC) (*.plt)
sK1 vector graphics files (UC) (*.sk1)

Inline LaTeX formula conversion dropped

The EQTeXSVG extension, which could be used to convert an inline LaTeX equation into SVG paths using Python, was dropped due to its external dependencies.

The team has asked users to test the Inkscape 1.0 beta and report their findings on the Inkscape report page. To know more about this news, check out the official Inkscape announcement page.

Other interesting news in web development this week!

Mozilla announces final four candidates that will replace its IRC network
Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript
Google announces two new attribute links, Sponsored and UGC and updates “nofollow”

Microsoft Edge Beta is now ready for you to try

Bhagyashree R
21 Aug 2019
3 min read
Yesterday, Microsoft announced the launch of the first beta builds of its Chromium-based Edge browser. Developers using the supported versions of Windows and macOS can now test it and share their feedback with the Microsoft Edge team.

https://twitter.com/MicrosoftEdge/status/1163874455690645504

The preview builds are made available to developers and early adopters through three channels on the Microsoft Edge Insider site: Beta, Canary, and Developer. Earlier this year, Microsoft opened the Canary and Developer channels. Canary builds receive updates every night, while Developer builds are updated weekly. Beta is the third and final preview channel and the most stable of the three. It receives updates every six weeks, along with periodic minor updates for bug fixes and security.

Source: Microsoft Edge Insider

What’s new in Microsoft Edge Beta

Microsoft Edge Beta comes with several options for personalizing your browsing experience. It supports a dark theme and offers 14 different languages to choose from. If you are not a fan of the default new tab page, you can customize it with the tab page customizations; there are currently three preset styles to switch between: Focused, Inspirational, and Informational. You can further customize and personalize Microsoft Edge through the different add-ons available on the Microsoft Edge Insider Addons store or other Chromium-based web stores.

On the user privacy front, this release brings support for tracking prevention. Enabling this feature protects you from being tracked by websites that you don’t visit. You can choose from three levels of privacy: Basic, Balanced, and Strict.

Microsoft Edge Beta also comes with some of the commercial features that were announced at Build this year. Microsoft Search is now integrated with Bing, which lets you search for OneDrive files directly from Bing search. There is support for Internet Explorer mode, which brings Internet Explorer 11 compatibility directly into Microsoft Edge, as well as for Windows Defender Application Guard for isolating enterprise-defined untrusted sites.

Along with this release, Microsoft also launched the Microsoft Edge Insider Bounty Program, under which researchers who find high-impact vulnerabilities in the Dev and Beta channels will receive rewards of up to US$30,000.

Read Microsoft’s official announcement to know more in detail.

Microsoft officially releases Microsoft Edge canary builds for macOS users
Microsoft Edge mobile browser now shows warnings against fake news using NewsGuard
Microsoft Edge Beta available on iOS with a breaking news alert, developer options and more