Tech News

New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL

Sugandha Lahoti
09 Mar 2018
3 min read
Microsoft has announced multiple updates to its Azure cloud platform today. These updates are meant to help companies migrate database workloads to its data centers and to make it easier to run them in Azure. SQL Server customers can now try the previews of SQL Database Managed Instance, Azure Hybrid Benefit for SQL Server, and the Azure Database Migration Service for Managed Instance. Additionally, Microsoft has announced a preview of Apache Tomcat® support in Azure App Service, and the general availability of Azure Database for MySQL and PostgreSQL in the coming weeks, making it easier to bring open source powered applications to Azure.

Microsoft SQL Database Managed Instance
Azure SQL Database Managed Instance allows seamless movement of any SQL Server application to Azure without application changes. Managed Instance offers full engine compatibility with existing SQL Server deployments, including capabilities like SQL Agent, DBMail, and Change Data Capture, to name a few.

Microsoft Azure Database Migration Service
The Azure Database Migration Service is designed as an end-to-end solution to help customers move databases from on-premises SQL Server instances to SQL Database Managed Instances.

Microsoft Azure Hybrid Benefit program
With the Azure Hybrid Benefit program, customers can now move their on-premises SQL Server licenses with active Software Assurance to Managed Instance, and soon their SQL Server Integration Services licenses to Azure Data Factory, at up to 30% discounted pricing.

Apache Tomcat® support in Microsoft Azure App Service
Microsoft also announced a preview of built-in support for Apache Tomcat and OpenJDK 8 in Azure App Service. This will help Java developers easily deploy web applications and APIs to Azure's market-leading PaaS. Once deployed, customers can extend their applications with the Azure SDK for Java to work with various Azure services such as Storage, Azure Database for MySQL, and Azure Database for PostgreSQL.

General availability of Azure database services for MySQL and PostgreSQL
Azure Database for MySQL and Azure Database for PostgreSQL provide customers with fully managed homes for their open source databases in Microsoft's cloud, reducing the time companies spend on tasks like database scaling and patching.

SQL Information Protection preview
SQL Information Protection lets organizations discover, classify, label, and protect potentially sensitive data stored in a database management system, either in Microsoft's cloud or in an organization's own datacenters. The service can be used with the Azure SQL Database service or with SQL Server on premises.

More information about these updates is available on the Microsoft Azure blog.
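
As a back-of-the-envelope illustration of what "up to 30% discounted pricing" under the Azure Hybrid Benefit means in practice, here is a minimal Python sketch. The list price and the flat-discount model are hypothetical examples, not Microsoft's actual pricing.

```python
def discounted_monthly_cost(list_price, discount_rate=0.30):
    """Apply a flat discount, e.g. the 'up to 30%' Azure Hybrid Benefit figure.

    list_price: undiscounted monthly cost (hypothetical example value).
    discount_rate: fraction between 0 and 1; 0.30 models the maximum discount.
    """
    if not 0.0 <= discount_rate <= 1.0:
        raise ValueError("discount_rate must be between 0 and 1")
    return list_price * (1.0 - discount_rate)

# A hypothetical $1,000/month Managed Instance at the full 30% discount:
print(discounted_monthly_cost(1000.0))  # 700.0
```

The real benefit depends on the customer's Software Assurance coverage and instance size; this only shows the arithmetic behind the headline figure.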
Apple shares tentative goals for WebKit 2020

Sugandha Lahoti
11 Nov 2019
3 min read
Apple has released a list of tentative goals for WebKit in 2020, catering to WebKit users as well as Web, Native, and WebKit developers. These features are tentative, and there is no guarantee that they will ship at all. Before committing to new features, Apple weighs a number of factors in a systematic way: it looks at developer interest and at the potentially harmful aspects of a feature, and sometimes also takes feedback and suggestions from high-value websites.

WebKit 2020 enhancements for WebKit users
Primarily, WebKit is focused on improving performance as well as privacy and security. Some of the performance ideas suggested include media query change handling, no sync IPC for cookies, fast for-of iteration, Turbo DFG, async gestures, fast scrolling on macOS, global GC, and Service Worker declarative routing. For privacy, Apple is focusing on addressing ITP bypasses, a logged-in API, in-app browser privacy, and PCM with fraud prevention. It is also working on improving authentication, network security, JavaScript hardening, WebCore hardening, and sandbox hardening.

Improvements in WebKit 2020 for Web developers
For the web platform, the focus is on three qualities: catch-up, innovation, and quality. Apple is planning improvements in graphics and animations (CSS overscroll-behavior, WebGL 2, Web Animations), in media (the Media Session standard, MediaStream Recording, the Picture-in-Picture API), and in DOM, JavaScript, and text. It is also looking to improve CSS Shadow Parts, stylable pieces, JS built-in modules, and the Undo Web API, and to work on WPT (Web Platform Tests).

Changes suggested for Native developers
For native developers on the obsolete legacy WebKit, the following changes are suggested:
WKWebView API needed for migration
Fixing cookie flakiness due to multiple process pools
WKWebView APIs for media

Enhancements for WebKit developers
The focus is on improving architecture health and services & tools.

Changes suggested are:
Define an "intent to implement" style process
Faster builds (finish unified builds)
Next-gen layout for line layout
Regression test debt repayment
IOSurface in Simulator
EWS (Early Warning System) improvements
Buildbot 2.0
WebKit on GitHub as a project (year 1 of a multi-year project)

On Hacker News, this topic was widely discussed, with people pointing out what they want to see in WebKit. One commenter wrote, "Two WebKit goals I'd like to see for 2020: (1) Allow non-WebKit browsers on iOS (start outperforming your competition instead of merely banning your competition), and (2) Make iOS the best platform for powerful web apps instead of the worst, the leader instead of the spoiler." Another pointed out, "It would be great if SVG rendering, used for diagrams, was of equal quality to Firefox." One said, "WebKit and the Safari browsers by extension should have full and proper support for Service Workers and PWAs on par with other browsers."

For a full list of updates, please see the WebKit Wiki page.

Read next:
Apple introduces Swift Numerics to support numerical computing in Swift
Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability
Apple's macOS Catalina in major turmoil as it kills iTunes and drops support for 32-bit applications
Win-KeX Version 2.0 from Kali Linux

Matthew Emerick
18 Sep 2020
3 min read
We have been humbled by the amazing response to our recent launch of Win-KeX. After its initial release, we asked ourselves whether that was truly the limit of what we could achieve, or whether we could pull off something incredible to mark the 25th anniversary of Hackers. What about "a second concurrent session as root", "seamless desktop integration with Windows", or, dare we dream, "sound"?

With no further ado, we are thrilled to present to you Win-KeX v2.0 with the following features:
Win-KeX SL (Seamless Edition) – bye bye borders
Sound support
Multi-session support
KeX sessions can be run as root
Ability to launch "kex" from anywhere – no more cd-ing into the Kali filesystem required
Shared clipboard – cut and paste content between Kali and Windows apps

The installation of Win-KeX is as easy as always (in a Kali WSL installation):
sudo apt upgrade && sudo apt install -y kali-win-kex

Win-KeX now supports two dedicated modes.

Win-KeX Window mode is the classic Win-KeX look and feel, with one dedicated window for the Kali Linux desktop. To launch Win-KeX in Window mode with sound support, type:
kex --win -s

Win-KeX SL mode provides a seamless integration of Kali Linux into the Windows desktop, with the Windows Start menu at the bottom and the Kali panel at the top of the screen. All applications are launched in their own windows, sharing the same desktop as Windows applications:
kex --sl -s

To enable sound, start Win-KeX with the --sound or -s command line parameter. We've been watching Blu-rays in Win-KeX SL without problems. Why, you ask? Because – now we can ;-)

Win-KeX now supports concurrent sessions:
Win-KeX as unprivileged user
Win-KeX as root user
Win-KeX SL

Windows Firewall
Both SL mode and sound support require access through the Windows Defender firewall. When prompted, tick "Public networks". You can later go to the firewall settings and restrict the scope to the WSL network (usually 172.3x.xxx.0/20).

Manpage
Forgotten that lifesaving parameter?

Try:
kex --help
for a quick overview, or consult the manual page for a detailed manual:
man kex

Big shout-out to the authors of the following components, without which there would be no Win-KeX:
Win-KeX Win is brought to you by TigerVNC
Win-KeX SL utilizes the VcXsrv Windows X Server
Sound support is achieved through the integration of PulseAudio

Further information can be found on our documentation site. We hope you enjoy Win-KeX as much as we do, and we'd love to see you around in the Kali Forums.
How to build a chatbot with the Microsoft Bot Framework

Kunal Chaudhari
27 Apr 2018
8 min read
The Microsoft Bot Framework is an incredible tool from Microsoft. It makes building chatbots easier and more accessible than ever, which means you can build awesome conversational chatbots for a range of platforms, including Facebook and Slack. In this tutorial, you'll learn how to build an FAQ chatbot using the Microsoft Bot Framework and ASP.NET Core. This tutorial has been taken from .NET Core 2.0 By Example. Let's get started.

You'll build a chatbot that can respond to simple queries such as:
How are you?
Hello!
Bye!

This should provide a good foundation for you to go further and build more complex chatbots with the Microsoft Bot Framework. The more you train the bot and the more questions you put in its knowledge base, the better it will be. If you're a UK-based public sector organisation, then ICS AI offer conversational AI solutions built to your needs. Their Microsoft-based infrastructure runs chatbots augmented with AI to better serve general public enquiries.

Build a basic FAQ chatbot with the Microsoft Bot Framework
First of all, we need to create a page that can be accessed anonymously: this is a frequently asked questions (FAQ) page, and hence the user should not be required to be logged in to the system to access it. To do so, let's create a new controller called FaqController in our LetsChat.csproj. It will be a very simple class with just one action called Index, which will display the FAQ page. The code is as follows:

[AllowAnonymous]
public class FaqController : Controller
{
    // GET: Faq
    public ActionResult Index()
    {
        return this.View();
    }
}

Notice that we have used the [AllowAnonymous] attribute, so that this controller can be accessed even if the user is not logged in. The corresponding .cshtml is also very simple. In the solution explorer, right-click on the Views folder under the LetsChat project, create a folder named Faq, and then add an Index.cshtml file in that folder.

The markup of the Index.cshtml would look like this:

@{
    ViewData["Title"] = "Let's Chat";
    ViewData["UserName"] = "Guest";
    if (User.Identity.IsAuthenticated)
    {
        ViewData["UserName"] = User.Identity.Name;
    }
}
<h1>
    Hello @ViewData["UserName"]! Welcome to FAQ page of Let's Chat
</h1>
<br />

Nothing much here apart from the welcome message. The message displays the username if the user is authenticated; otherwise it displays Guest. Now, we need to integrate the chatbot on this page. To do so, let's browse to http://qnamaker.ai. This is Microsoft's QnA (as in questions and answers) Maker site, a free, easy-to-use, REST API and web-based service that trains artificial intelligence (AI) to respond to user questions in a more natural, conversational way. Compatible across development platforms, hosting services, and channels, QnA Maker is the only question and answer service with a graphical user interface, meaning you don't need to be a developer to train, manage, and use it for a wide range of solutions. And that is what makes it incredibly easy to use.

You will need to log in to this site with your Microsoft account (@microsoft/@live/@outlook). If you don't have one, you should create one and log in. On the very first login, the site will display a dialog seeking permission to access your email address and profile information. Click Yes and grant permission. You will then be presented with the service terms; accept those as well. Then navigate to the Create New Service tab. A form will appear, which is easy to fill in and provides the option to extract question/answer pairs from a site or from .tsv, .docx, .pdf, and .xlsx files. We don't have questions handy, so we will type them; do not bother about these fields. Just enter the service name and click the Create button. The service should be created successfully and the knowledge base screen should be displayed.

We will enter probable questions and answers in this knowledge base. If the user types a question that resembles a question in the knowledge base, the service will respond with the corresponding answer. Hence, the more questions and answers we type, the better it will perform. So, enter all the questions and answers that you wish to enter, test it in the local chatbot setup, and, once you are happy with it, click on Publish. This publishes the knowledge base and shares the sample URL for making the HTTP request. Note it down in a notepad; it contains the knowledge base identifier (a GUID), hostname, and subscription key.

With this, our questions and answers are ready and deployed. We need to display a chat interface, pass the user-entered text to this service, and display the response from this service to the user in the chat user interface. To do so, we will make use of the Microsoft Bot Builder SDK for .NET and follow these steps:

Download the Bot Application project template from http://aka.ms/bf-bc-vstemplate.
Download the Bot Controller item template from http://aka.ms/bf-bc-vscontrollertemplate.
Download the Bot Dialog item template from http://aka.ms/bf-bc-vsdialogtemplate.
Next, identify the project template and item template directories for Visual Studio 2017. The project template directory is located at %USERPROFILE%\Documents\Visual Studio 2017\Templates\ProjectTemplates\Visual C# and the item template directory is located at %USERPROFILE%\Documents\Visual Studio 2017\Templates\ItemTemplates\Visual C#.
Copy the Bot Application project template to the project template directory.
Copy the Bot Controller ZIP and Bot Dialog ZIP to the item template directory.
In the solution explorer of the LetsChat project, right-click on the solution and add a new project. Under Visual C#, we should now see a Bot Application template.
Name the project FaqBot and click OK.

A new project will be created in the solution, which looks similar to the MVC project template. Build the project so that all the dependencies are resolved and packages are restored. If you run the project, it is already a working bot, which can be tested with the Microsoft Bot Framework emulator. Download the BotFramework-Emulator setup executable from https://github.com/Microsoft/BotFramework-Emulator/releases/.

Let's run the bot project by hitting F5. It will display a page pointing to the default URL of http://localhost:3979. Now, open the Bot Framework emulator, navigate to the preceding URL with api/messages appended to it, that is, browse to http://localhost:3979/api/messages, and click Connect. On successful connection to the bot, a chat-like interface will be displayed in which you can type a message.

We have a working bot in place which just returns the text along with its length. We need to modify this bot to pass the user input to our QnA Maker service and display the response returned from our service. To do so, we will need to check the code of MessagesController in the Controllers folder. We notice that it has just one method, called Post, which checks the activity type, does specific processing for the activity type, creates a response, and returns it. The calculation happens in the Dialogs.RootDialog class, which is where we need to make the modification to wire up our QnA service. The modified code is shown here:

private static string knowledgeBaseId = ConfigurationManager.AppSettings["KnowledgeBaseId"]; // Knowledge base id of the QnA service.
private static string qnamakerSubscriptionKey = ConfigurationManager.AppSettings["SubscriptionKey"]; // Subscription key.
private static string hostUrl = ConfigurationManager.AppSettings["HostUrl"];

private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
{
    var activity = await result as Activity;

    // Return our reply to the user.
    await context.PostAsync(this.GetAnswerFromService(activity.Text));
    context.Wait(MessageReceivedAsync);
}

private string GetAnswerFromService(string inputText)
{
    // Build the QnA service URI.
    Uri qnamakerUriBase = new Uri(hostUrl);
    var builder = new UriBuilder($"{qnamakerUriBase}/knowledgebases/{knowledgeBaseId}/generateAnswer");
    var postBody = $"{{\"question\": \"{inputText}\"}}";

    // Add the subscription key header.
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("Ocp-Apim-Subscription-Key", qnamakerSubscriptionKey);
        client.Headers.Add("Content-Type", "application/json");
        try
        {
            var response = client.UploadString(builder.Uri, postBody);
            var json = JsonConvert.DeserializeObject<QnAResult>(response);
            return json?.answers?.FirstOrDefault()?.answer;
        }
        catch (Exception ex)
        {
            return ex.Message;
        }
    }
}

The code is pretty straightforward. First, we add the QnA Maker service subscription key, host URL, and knowledge base ID in the appSettings section of Web.config. Next, we read these app settings into static variables so that they are always available. Next, we modify the MessageReceivedAsync method of the dialog to pass the user input to the QnA service and return the service's response back to the user. The QnAResult class can be seen in the source code. This can be tested in the emulator by typing in any of the questions that we have stored in our knowledge base, and we will get the appropriate response.

Our simple FAQ bot using the Microsoft Bot Framework and ASP.NET Core 2.0 is now ready!

Read more about building chatbots:
How to build a basic server side chatbot using Go
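
For readers not working in .NET, the shape of the generateAnswer request the bot sends can be sketched in plain Python. The endpoint path, header, and JSON body mirror the C# code above; the host URL, knowledge base ID, and key below are placeholder values you would replace with the ones noted down from the QnA Maker portal.

```python
import json

# Placeholder values -- in a real app these come from the QnA Maker portal.
HOST_URL = "https://example-qnamaker.azurewebsites.net/qnamaker"
KNOWLEDGE_BASE_ID = "00000000-0000-0000-0000-000000000000"
SUBSCRIPTION_KEY = "your-subscription-key"

def build_generate_answer_request(question):
    """Build the URL, headers, and JSON body for a generateAnswer call,
    matching the UriBuilder/WebClient logic in the C# dialog above."""
    url = f"{HOST_URL}/knowledgebases/{KNOWLEDGE_BASE_ID}/generateAnswer"
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    body = json.dumps({"question": question})
    return url, headers, body

url, headers, body = build_generate_answer_request("How are you?")
print(url.endswith("/generateAnswer"))  # True
print(json.loads(body)["question"])     # How are you?
```

Sending the request (for example with any HTTP client) would then return a JSON document whose answers array corresponds to the QnAResult class in the C# code.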
Google employees quit over company’s continued Artificial Intelligence ties with the Pentagon

Amey Varangaonkar
16 May 2018
2 min read
Raising ethical concerns over Google's continued involvement in developing artificial intelligence for military and warfare purposes, about a dozen Google employees have reportedly resigned. Since its inception, many Googlers have been against Project Maven, Google's project with the Pentagon to supply machine learning technologies for image recognition and object detection in military drones.

Earlier in April, Google employees had signed a petition urging Google CEO Sundar Pichai to dissociate the company from the Department of Defense by pulling out of Project Maven. They were of the opinion that humans, not AI algorithms, should be responsible for sensitive and potentially life-threatening military work, and that Google should invest in the betterment of human lives, not in war.

Google had reassured its employees that the technology would be used in a non-offensive manner, and that policies were in effect regarding the use of AI in military projects. However, the resigning employees are of the view that these policies were not being strictly followed. The employees also felt that Google was less transparent about communicating controversial business decisions and was not as receptive to employee feedback as before. One of the employees who resigned said, "Over the last couple of months, I've been less and less impressed with Google's response and the way our concerns are being listened to."

The resignations shed a bad light on Google's employee retention strategy and on its reputation as a whole. They might encourage more employees to evaluate their position within the company, given the lack of grievance redressal from Google's end. Surrounded by fierce competition, losing talent to its rivals should be the last thing on Google's agenda right now, and it will be interesting to see what Google's plan of action will be in this regard.

On the other hand, rivals Microsoft and Amazon have also signed partnerships with the US government, offering the required infrastructure and services to improve defence capabilities. While there have been no reports of protests by their employees, Google seems to have found itself in a soup on ethical and moral grounds.

Read next:
Google Employees Protest against the use of Artificial Intelligence in Military
Google News' AI revolution strikes balance between personalization and the bigger picture
Google announces the largest overhaul of their Cloud Speech-to-Text
Microsoft announces .NET Jupyter Notebooks

Savia Lobo
13 Nov 2019
3 min read
At Microsoft Ignite 2019, Microsoft announced that Jupyter Notebooks will now allow users to run .NET code with the new .NET Jupyter Notebooks. Try .NET has grown to support more interactive experiences across the web, with runnable code snippets and an interactive documentation generator for .NET Core via the dotnet try global tool. The same codebase is now taken to the next level with C# and F# support in Jupyter notebooks.

What's new in .NET Jupyter Notebooks
By default, the .NET notebook experience enables users to display useful information about an object in table format. .NET notebooks also ship by default with several helper methods for writing HTML, from basic helpers that let users write out a string as HTML or output JavaScript, to more complex HTML with PocketView.

.NET notebooks are a perfect match for ML.NET
.NET notebooks bring interesting options for ML.NET, like exploring and documenting model training experiments, data distribution exploration, data cleaning, plotting data charts, and learning. To leverage ML.NET in Jupyter notebooks, users can check out the blog post "Using ML.NET in Jupyter notebooks", which comes with several online samples.

Create charts using XPlot
Charts are rendered using XPlot.Plotly. As soon as users import the XPlot.Plotly namespace into their notebooks (using XPlot.Plotly;), they can begin creating rich data visualizations in .NET. (Chart image source: Microsoft.com)

.NET for Apache Spark
With .NET for Apache Spark, .NET developers have two options for running .NET for Apache Spark queries in notebooks: Azure Synapse Analytics Notebooks and Azure HDInsight Spark + Jupyter Notebooks. Both experiences allow developers to write and run quick ad-hoc queries in addition to developing complete, end-to-end big data scenarios, such as reading in data, transforming it, and visualizing it. To learn how to get started with .NET for Apache Spark, visit the GitHub repo.

Many users are excited to try out the new .NET Jupyter Notebooks.

A user on Hacker News commented, "This is great news. Jupyter has become my default tool for prototyping code. I keep trying other platforms that should theoretically have the same features, but I just find Jupyter much more pleasant to use." Another user commented, "I love .NET and I love Jupyter. I don't know how well they will combine though. I feel like the lack of Pandas and flexible typing of Python will make it a lot less useful."

To know more about this announcement in detail, read Scott Hanselman's post on his website.

Read next:
Introducing Voila that turns your Jupyter notebooks to standalone web applications
JupyterHub 1.0 releases with named servers, support for TLS encryption and more
.NET Framework API Porting Project concludes with .NET Core 3.0
Q# 101: Getting to know the basics of Microsoft’s new quantum computing language

Sugandha Lahoti
14 Dec 2017
5 min read
A few days back we posted about the preview of Microsoft's development toolkit with a new quantum programming language, simulator, and supporting tools. The development kit contains the tools that allow developers to build their own quantum computing programs and experiments. A major component of the Quantum Development Kit preview is the Q# programming language. According to Microsoft, "Q# is a domain-specific programming language used for expressing quantum algorithms. It is to be used for writing sub-programs that execute on an adjunct quantum processor, under the control of a classical host program and computer."

The Q# programming language is foundational for any developer of quantum software. It is deeply integrated with Microsoft Visual Studio, which makes programming quantum computers easy for developers who are well-versed in Visual Studio. An interesting feature of Q# is that it supports a basic procedural model (read: loops and if/then statements) for writing programs. The top-level constructs in Q# are user-defined types, operations, and functions.

The type models
Q# provides several type models. There are primitive types such as the Qubit type and the Pauli type. The Qubit type represents a quantum bit, or qubit; a quantum computer stores information in the form of qubits, which can hold 1s and 0s at the same time. Qubits can either be tested for identity (equality) or passed to another operation; actions on qubits are implemented by calling operations in the Q# standard library. The Pauli type represents an element of the single-qubit Pauli group: the 16-element matrix group consisting of the 2 × 2 identity matrix and all of the Pauli matrices. This type has four possible values: PauliI, PauliX, PauliY, and PauliZ. There are also array and tuple types for creating new, structured types. It is possible to create arrays of tuples, tuples of arrays, tuples of sub-tuples, and so on.

Tuple instances are immutable, i.e. the contents of a tuple can't be changed once created. Q# does not include support for rectangular multi-dimensional arrays. Q# also has user-defined types, which may be used anywhere. It is possible to define an array of a user-defined type and to include a user-defined type as an element of a tuple type:

newtype TypeA = (Int, TypeB);
newtype TypeB = (Double, TypeC);
newtype TypeC = (TypeA, Range);

Operations and functions
A Q# operation is a quantum subroutine, which means it is a callable routine that contains quantum operations. A Q# function is the traditional subroutine used within a quantum algorithm; it has no quantum operations. You may pass operations or qubits to functions for processing; however, functions can't allocate or borrow qubits or call operations. Operations and functions are together known as callables. A functor in Q# is a factory that specifies a new operation from another operation. An important feature of functors is that they have access to the implementation of the base operation when defining the implementation of the new operation, which makes them more powerful than classical higher-order functions.

Comments
Comments begin with two forward slashes, //, and continue until the end of the line. A comment may appear anywhere in a Q# source file, including where statements are not valid. However, end-of-line comments in the middle of an expression are not supported, although the expression can be multi-lined. Comments can also begin with three forward slashes, ///. Their contents are considered documentation for the defined callable or user-defined type when they appear immediately before an operation, function, or type definition.

Namespaces
Q# follows the same rules for namespaces as other .NET languages. Every Q# operation, function, and user-defined type is defined within a namespace. However, Q# does not support nested namespaces.

Control flow
The control flow constructs are the for loop, the repeat-until-success loop, the return statement, and the conditional statement.

For loop
Like the traditional for loop, Q# uses the for statement for iteration through an integer range. The statement consists of the keyword for, followed by an identifier, the keyword in, a Range expression, and a statement block:

for (index in 0 .. n-2) {
    set results[index] = Measure([PauliX], [qubits[index]]);
}

Repeat-until-success loop
The repeat statement supports the quantum "repeat until success" pattern. It consists of the keyword repeat, followed by a statement block (the loop body), the keyword until, a Boolean expression, the keyword fixup, and another statement block (the fixup):

using ancilla = Qubit[1] {
    repeat {
        let anc = ancilla[0];
        H(anc);
        T(anc);
        CNOT(target, anc);
        H(anc);
        let result = M([anc], [PauliZ]);
    } until result == Zero
    fixup {
        ();
    }
}

The conditional statement
Similar to the if-then conditional statement in most programming languages, the if statement in Q# supports conditional execution. It consists of the keyword if, followed by a Boolean expression and a statement block (the then block). This may be followed by any number of else-if clauses, each of which consists of the keyword elif, followed by a Boolean expression and a statement block (the else-if block):

if (result == One) {
    X(target);
} else {
    Z(target);
}

Return statement
The return statement ends execution of an operation or function and returns a value to the caller. It consists of the keyword return, followed by an expression of the appropriate type and a terminating semicolon:

return 1;
return ();
return (results, qubits);

File structure
A Q# file consists of one or more namespace declarations. Each namespace declaration contains definitions for user-defined types, operations, and functions.

You can download the Quantum Development Kit here.
You can learn more about the features of the Q# language here.
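
For readers without a quantum simulator at hand, the control flow of the repeat-until-success pattern described above can be sketched classically. The Python sketch below is an analogy only: the coin-flip "measurement" is a stand-in for the quantum measurement in the Q# example, and the function names are our own.

```python
import random

def repeat_until_success(measure, fixup, max_attempts=1000):
    """Classical analogue of Q#'s repeat ... until ... fixup loop:
    run the body, test the condition, run the fixup on failure, retry."""
    for attempt in range(1, max_attempts + 1):
        result = measure()      # the loop body's "measurement"
        if result == "Zero":    # the 'until' condition
            return attempt
        fixup()                 # runs only when the condition fails
    raise RuntimeError("condition never satisfied")

random.seed(42)  # make the coin-flip measurement reproducible
attempts = repeat_until_success(
    measure=lambda: random.choice(["Zero", "One"]),
    fixup=lambda: None,
)
print(attempts >= 1)  # True
```

In real Q#, the fixup block typically undoes the side effects of a failed attempt on the qubits before the loop body runs again; the classical sketch only mirrors the shape of the control flow.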
LLVM 8.0.0 releases!

Natasha Mathur
22 Mar 2019
3 min read
The LLVM team released LLVM 8.0 earlier this week. LLVM is a collection of tools that help develop compiler front ends and back ends. LLVM is written in C++ and has been designed for compile-time, link-time, run-time, and "idle-time" optimization of programs written in arbitrary programming languages. The LLVM 8.0 release notes cover known issues, major improvements, and other changes in the subprojects of LLVM.

There were certain issues in LLVM 8.0.0 that could not be fixed before this release. For instance, clang is getting miscompiled by trunk GCC, and "asan-dynamic" does not work on FreeBSD. Beyond the known issues, there is a long list of changes in LLVM 8.0.0.

Non-comprehensive changes to LLVM 8.0.0
The llvm-cov tool can export lcov trace files with the help of the -format=lcov option of the export command.
The add_llvm_loadable_module CMake macro has been deprecated. The add_llvm_library macro with the MODULE argument now provides the same functionality.
For MinGW, references to data variables that are to be imported from a DLL can now be accessed via a stub, which further allows the linker to convert them to a dllimport if needed.
Support has been added for labels as offsets in the .reloc directive.
Windows support for libFuzzer (x86_64) has also been added.

Other changes
LLVM IR: The function attribute speculative_load_hardening has been introduced. It indicates that Speculative Load Hardening should be enabled for the function body.
JIT APIs: ORC (On Request Compilation) JIT APIs now support concurrent compilation. The existing (non-concurrent) ORC layer classes, as well as the related APIs, have been deprecated and renamed with a "Legacy" prefix (e.g. LegacyIRCompileLayer). All the deprecated classes will be removed in LLVM 9.
AArch64 target: Support has been added for Speculative Load Hardening.

Also, initial support added for the Tiny code model, where code and the statically defined symbols should remain within 1MB. MIPS Target: Support forGlobalISel instruction selection framework has been improved. ORC JIT will now offer support for MIPS and MIPS64 architectures. There’s also newly added support for MIPS N32 AB. PowerPC Target: This has now been switched to non-PIC default in LLVM 8.0.0. Darwin support has also been deprecated. Also, Out-of-Order scheduling has been enabled for P9. SystemZ Target: These include various code-gen improvements related to improved auto-vectorization, inlining, as well as the instruction scheduling. Other than these, changes have also been made to X86 target, WebAssembly Target, Nios2 target, and LLDB. For a complete list of changes, check out the official LLVM 8.0.0 release notes. LLVM 7.0.0 released with improved optimization and new tools for monitoring LLVM will be relicensing under Apache 2.0 start of next year LLVM officially migrating to GitHub from Apache SVN
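The deprecated CMake macro noted above migrates with a one-line change. A minimal sketch, assuming a hypothetical out-of-tree plugin target named MyPlugin:

```cmake
# Before (deprecated in LLVM 8.0.0):
add_llvm_loadable_module(MyPlugin MyPlugin.cpp)

# After: the same loadable-module behavior via add_llvm_library
add_llvm_library(MyPlugin MODULE MyPlugin.cpp)
```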

Savia Lobo
25 Sep 2018
7 min read

Microsoft Ignite 2018: Highlights from day 1

Microsoft Ignite 2018 kicked off yesterday, the 24th of September 2018, in Orlando, Florida. The event will run until the 28th of September 2018 and will host more than 26,000 Microsoft developers from more than 100 countries. Day 1 of Microsoft Ignite was full of exciting news and announcements, including Microsoft Authenticator, AI-enabled updates to Microsoft 365, and much more! Let's take a look at some of the most important announcements from Orlando.

Microsoft puts an end to passwords via its Microsoft Authenticator app

Microsoft security helps protect hundreds of thousands of line-of-business and SaaS apps as they connect to Azure AD. Microsoft plans to deliver new support for password-less sign-in to Azure AD-connected apps via Microsoft Authenticator. The Microsoft Authenticator app replaces your password with a more secure multi-factor sign-in that combines your phone and your fingerprint, face, or PIN. Using a multi-factor sign-in method, users can reduce compromise by 99.9%. Not only is it more secure, but it also improves the user experience by eliminating passwords. The age of the password might be reaching its end, thanks to Microsoft.

Azure IoT Central is now generally available

Microsoft announced the public preview of Azure IoT Central in December 2017. At Ignite yesterday, Microsoft made IoT Central generally available. Azure IoT Central is a fully managed software-as-a-service (SaaS) offering, which enables customers and partners to provision an IoT solution in seconds. Users can customize it in just a few hours and go to production the same day, all without requiring any cloud solution development expertise. Azure IoT Central is built on the hyperscale, enterprise-grade services provided by Azure IoT. In theory, it should match the security and scalability needs of Azure users.

Microsoft has also collaborated with MultiTech, a leading provider of communications hardware for the Internet of Things, to integrate IoT Central functionality into the MultiConnect Conduit programmable gateway. This integration enables out-of-the-box connectivity from Modbus-connected equipment directly into IoT Central, for unparalleled simplicity from proof of concept through wide-scale deployments. To know more about Azure IoT Central, visit its blog.

Microsoft Azure introduces Azure Digital Twins, the next evolution in IoT

Azure Digital Twins allows customers and partners to create a comprehensive digital model of any physical environment, including people, places, and things, as well as the relationships and processes that bind them. Azure Digital Twins uses Azure IoT Hub to connect the IoT devices and sensors that keep the digital model up to date with the physical world. This enables two powerful capabilities:

Users can respond to changes in the digital model in an event-driven and serverless way to implement business logic and workflows for the physical environment. For instance, when a presentation is started in PowerPoint in a conference room, the environment could automatically dim the lights and lower the blinds. After the meeting, when everyone has left, the lights are turned off and the air conditioning is lowered.
Azure Digital Twins also integrates seamlessly with Azure data and analytics services, enabling users to track the past and predict the future of their digital model.

Azure Digital Twins will be available for preview on October 15 with additional capabilities. To know more, visit its webpage.

Azure Sphere, a solution for creating highly secure MCU devices

To help organizations seize connected device opportunities while meeting the challenge of IoT risks, Microsoft developed Azure Sphere, a solution for creating highly secure MCU devices. At Ignite 2018, Microsoft announced that Azure Sphere development kits are universally available and that the Azure Sphere OS, the Azure Sphere Security Service, and the Visual Studio development tools have entered public preview. Together, these tools provide everything needed to start prototyping new products and experiences with Azure Sphere. Azure Sphere allows manufacturers to build highly secure, internet-enabled MCU devices that stay protected even in an evolving threat landscape. Azure Sphere's unique mix of three components works in unison to reduce risk, no matter how the threats facing organizations change:

The Azure Sphere MCU includes built-in hardware-based security.
The purpose-built Azure Sphere OS adds a four-layer defense-in-depth software environment.
The Azure Sphere Security Service renews security to protect against new and emerging threats.

Adobe, Microsoft, and SAP announce the Open Data Initiative

At the Ignite conference, the CEOs of Adobe, Microsoft, and SAP introduced an Open Data Initiative to help companies connect, understand, and use all their data to create amazing experiences for their customers with AI. Together, the three long-standing partners are reimagining customer experience management (CXM) by empowering companies to derive more value from their data and deliver world-class customer experiences in real time. The Open Data Initiative is based on three guiding principles:

Every organization owns and maintains complete, direct control of all its data.
Customers can enable AI-driven business processes to derive insights and intelligence from unified behavioral and operational data.
A broad partner ecosystem should be able to easily leverage an open and extensible data model to extend the solution.

Microsoft now lets businesses rent a virtual Windows 10 desktop in Azure

Until now, virtual Windows 10 desktops were the domain of third-party service providers. From now on, Microsoft itself will offer these desktops. The company argues that this is the first time users will get a multiuser virtualized Windows 10 desktop in the cloud. Most employees don't necessarily always work from the same desktop or laptop. This virtualized solution will allow organizations to offer them a full Windows 10 desktop in the cloud, with all the Office apps they know, without the cost of having to provision and manage a physical machine.

A universal search feature across Bing and Office.com

Microsoft announced that it is rolling out a universal search feature across Bing and Office.com. The search feature will later be supported in Edge, Windows, and Office, and will be able to index internal documents to make it easier to find files. Search is being moved to a prominent and consistent place across the apps used every day, whether that is Outlook, PowerPoint, Excel, Teams, or others. Also, personalized results will appear in the search box so that users can see documents they worked on recently. Here's a short video about the universal search feature. https://youtu.be/mtjJdltMoWU

New AutoML capabilities in Azure Machine Learning service

Microsoft also announced new capabilities for its Azure Machine Learning service, a technology that allows anyone to build and train machine learning models to make predictions from data. These models can then be deployed anywhere: in the cloud, on-premises, or at the edge. At the center of the update is automated machine learning, an AI capability that automatically selects, tests, and tweaks the machine learning models that power many of today's AI systems. The capability is aimed at making AI development accessible to a broader set of customers.

Preview announcement of SQL Server 2019

Microsoft announced the first public preview of SQL Server 2019 at Ignite 2018. With this new release of SQL Server, businesses will be able to manage their relational and non-relational data workloads in a single database management system. Expected highlights of SQL Server 2019 include:

Microsoft SQL Server 2019 will run either on-premises or on the Microsoft Azure stack.
Microsoft announced the Azure SQL Database Managed Instance, which will allow businesses to port their databases to the cloud without any code changes.
Microsoft announced new database connectors that will allow organizations to integrate SQL Server with other databases such as Oracle, Cosmos DB, MongoDB, and Teradata.

To know more about SQL Server 2019, read 'Microsoft announces the first public preview of SQL Server 2019 at Ignite 2018'.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure Functions 2.0 launches with better workload support for serverless
Microsoft, Adobe and SAP announce Open Data Initiative, a joint vision to reimagine customer experience, at Ignite 2018

Savia Lobo
03 Jun 2019
3 min read

DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games

Recently, researchers from DeepMind released research in which they designed AI agents that can team up to play Quake III Arena's Capture the Flag mode. The highlight of this research is that these agents were able to team up against human players or play alongside them, tailoring their behavior accordingly. We have previously seen instances of an AI agent beating humans in video games like StarCraft II and Dota 2. However, those games did not involve agents playing in a complex environment or require teamwork and interaction between multiple players.

In the research paper, "Human-level performance in 3D multiplayer games with population-based reinforcement learning", a group of 30 AI agents were collectively trained to play five-minute rounds of Capture the Flag, a game mode in which teams must retrieve flags from their opponents while retaining their own. https://youtu.be/OjVxXyp7Bxw

While playing rounds of Capture the Flag, the DeepMind AI was able to outperform human teammates, with its reaction time slowed down to that of a typical human player. And rather than a number of AIs teaming up against a group of human players, as in Dota 2, the AI was able to play alongside them as well. Using reinforcement learning, the AI taught itself the game, picking up the rules over thousands of matches in randomly generated environments. "No one has told [the AI] how to play the game — only if they've beaten their opponent or not. The beauty of using [an] approach like this is that you never know what kind of behaviors will emerge as the agents learn," said Max Jaderberg, a research scientist at DeepMind who recently worked on AlphaStar, a machine learning system that bested a team of human professionals at StarCraft II.

Greg Brockman, a researcher at OpenAI, told The New York Times, "Games have always been a benchmark for A.I. If you can't solve games, you can't expect to solve anything else." According to The New York Times, "such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic."

Talking about limitations, the researchers say, "Limitations of the current framework, which should be addressed in future work, include the difficulty of maintaining diversity in agent populations, the greedy nature of the meta-optimization performed by PBT, and the variance from temporal credit assignment in the proposed RL updates."

"Our work combines techniques to train agents that can achieve human-level performance at previously insurmountable tasks. When trained in a sufficiently rich multiagent world, complex and surprising high-level intelligent artificial behavior emerged", the paper states. To know more about this news in detail, visit the official research paper on Science.

OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers
Samsung AI lab researchers present a system that can animate heads with one-shot learning
Amazon is reportedly building a video game streaming service, says Information
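The outcome-only, population-based training described above can be illustrated with a toy sketch in Python. This is a hypothetical simplification, not DeepMind's method: an agent's "skill" scalar stands in for its network weights, the "lr" field stands in for the hyperparameters that population-based training (PBT) mutates, and only win/loss is observed, mirroring the paper's sparse reward signal.

```python
import random

def play_match(a, b):
    # Toy stand-in for a Capture the Flag round: the agent with the
    # higher (noisy) skill wins. Only the outcome is observed.
    if a["skill"] + random.gauss(0, 1) > b["skill"] + random.gauss(0, 1):
        return a, b
    return b, a

def population_based_training(pop_size=30, generations=200):
    # Each agent: a scalar "skill" plus one mutable hyperparameter.
    population = [{"skill": 0.0, "lr": random.uniform(0.1, 1.0)}
                  for _ in range(pop_size)]
    for _ in range(generations):
        winner, loser = play_match(*random.sample(population, 2))
        winner["skill"] += winner["lr"]                         # winners keep learning
        loser["skill"] = winner["skill"]                        # exploit: copy the winner
        loser["lr"] = winner["lr"] * random.choice([0.8, 1.2])  # explore: perturb
    return population

pop = population_based_training()
print(max(agent["skill"] for agent in pop))
```

Over the generations the population's best skill ratchets upward even though no agent is ever told the rules, only whether it won, which is the essence of the approach the article describes.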
Fatema Patrawala
07 Nov 2019
4 min read

Yubico reveals Biometric YubiKey at Microsoft Ignite

On Tuesday, at the ongoing Microsoft Ignite, Yubico, the leading provider of authentication and encryption hardware, announced the long-awaited YubiKey Bio, the first YubiKey to support fingerprint recognition for secure and seamless passwordless logins. According to the team, this has been one of the most requested features from YubiKey users.

Key features in YubiKey Bio

The YubiKey Bio delivers the convenience of biometric login with the added benefits of Yubico's hallmark security, reliability, and durability assurances. Biometric fingerprint credentials are stored in the secure element, which helps protect them against physical attacks. As a result, a single, trusted hardware-backed root of trust delivers a seamless login experience across different devices, operating systems, and applications.

With support for both biometric- and PIN-based login, the YubiKey Bio leverages the full range of multi-factor authentication (MFA) capabilities outlined in the FIDO2 and WebAuthn standard specifications. In keeping with Yubico's design philosophy, the YubiKey Bio will not require any batteries, drivers, or associated software. The key seamlessly integrates with the native biometric enrollment and management features supported in the latest versions of Windows 10 and Azure Active Directory, making it quick and convenient for users to adopt a phishing-resistant passwordless login flow.

"As a result of close collaboration between our engineering teams, Yubico is bringing strong hardware-backed biometric authentication to market to provide a seamless experience for our customers," said Joy Chik, Corporate VP of Identity, Microsoft. "This new innovation will help drive adoption of safer passwordless sign-in so everyone can be more secure and productive."

The Yubico team has worked with Microsoft over the past few years to help drive the future of passwordless authentication through the creation of the FIDO2 and WebAuthn open authentication standards. Additionally, they have built YubiKey integrations with the full suite of Microsoft products, including Windows 10 with Azure Active Directory and Microsoft Edge with Microsoft Accounts.

Microsoft Ignite attendees saw a live demo of passwordless sign-in to Microsoft Azure Active Directory accounts using the YubiKey Bio. The team also promises that by early next year, enterprise users will be able to authenticate to on-premises Active Directory integrated applications and resources, and get seamless Single Sign-On (SSO) to cloud- and SAML-based applications. To take advantage of strong YubiKey authentication in Azure Active Directory environments, users can refer to this page for more information.

On Hacker News, this news received mixed reactions: while some are in favor of biometric authentication, others believe that keeping stronger passwords is still a better choice. One of them commented, "1) This is an upgrade to the touch sensitive button that's on all YubiKeys today. The reason you have to touch the key is so that if an attacker gains access to your computer with an attached Yubikey, they will not be able to use it (it requires physical presence). Now that touch sensitive button becomes a fingerprint reader, so it can't be activated by just anyone. 2) The computer/OS doesn't have to support anything for this added feature."

Another user responds, "A fingerprint is only going to stop a very opportunistic attacker. Someone who already has your desktop and app password and physical access to your desktop can probably get a fingerprint off a glass, cup or something else. I don't think this product is as useful as it seems at first glance. Using stronger passwords is probably just as safe."

Google updates biometric authentication for Android P, introduces BiometricPrompt API
GitHub now supports two-factor authentication with security keys using the WebAuthn API
You can now use fingerprint or screen lock instead of passwords when visiting certain Google services thanks to FIDO2 based authentication
Microsoft and Cisco propose ideas for a Biometric privacy law after the state of Illinois passed one
SafeMessage: An AI-based biometric authentication solution for messaging platforms
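Under the hood, the FIDO2/WebAuthn logins described above rest on public-key challenge/response rather than shared passwords. A minimal sketch of that core idea, using the third-party cryptography package; this is a hypothetical simplification: real WebAuthn adds origin checks, attestation, signature counters, and CBOR-encoded authenticator data, and the private key never leaves the security key's secure element.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the authenticator (e.g. a YubiKey) generates a key pair
# and hands the server only the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server sends a random challenge; the device signs it only
# after user presence is proven (touch, PIN, or a fingerprint on the Bio).
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server verifies the signature against the stored public key;
# verify() raises InvalidSignature on mismatch. No shared secret or
# password ever crosses the wire.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("authenticated")
```

Because each login signs a fresh random challenge, a phished or replayed response is useless against a different challenge, which is what makes the flow phishing-resistant.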

Sugandha Lahoti
16 Aug 2018
2 min read

Unity switches to WebAssembly as the output format for the Unity WebGL build target

With the launch of the Unity 2018.2 release last month, Unity is finally making the switch to WebAssembly as the output format for the Unity WebGL build target. WebAssembly support was first teased in Unity 5.6 as an experimental feature. Unity 2018.1 marked the removal of the experimental label, and finally, in 2018.2, WebAssembly replaces asm.js as the default linker target.

Source: Unity Blog

WebAssembly replaced asm.js because it is faster, smaller, and more memory-efficient, which addresses long-standing pain points of the Unity WebGL export. A WebAssembly file is a binary file, a more compact way to deliver code, as opposed to asm.js, which is text. In addition, code modules that have already been compiled can be stored in an IndexedDB cache, resulting in very fast startup when reloading the same content. With WebAssembly, the code size for an empty project is ~12% smaller, or ~18% if 3D physics is included.

Source: Unity Blog

WebAssembly also has its own instruction set. In Development builds, it adds more precise error detection in arithmetic operations. In non-development builds, this kind of detection of arithmetic errors is masked, so the user experience is not affected.

asm.js imposed a restriction on the size of the Unity heap: its size had to be specified at build time and could never change. WebAssembly enables the Unity heap to grow at runtime, which lets Unity content exceed the initial heap size in memory usage.

Unity is now working on multi-threading support, which will initially be released as an experimental feature and will be limited to internal native threads (no C# threads yet). Debugging hasn't improved much yet: while browsers have begun to provide WebAssembly debugging in their devtools suites, these debuggers do not yet scale well to Unity3D-sized content.

What's next to come

Unity is still working on new features and optimizations to improve startup times and performance:

Asynchronous instantiation
Structured cloning, which allows compiled WebAssembly to be cached in the browser
Baseline and tiered compilation, to speed up instantiation
Streaming instantiation to compile WebAssembly code while downloading it
Multi-threading

You can read the full details on the Unity Blog.

Unity 2018.2: Unity release for this year second time in a row!
GitHub for Unity 1.0 is here with Git LFS and file locking support
What you should know about Unity 2018 Interface

Richard Gall
10 May 2018
4 min read

Google News' AI revolution strikes balance between personalization and the bigger picture

Google launched a major revamp of its news feature at Google I/O 2018. Fifteen years after its launch, Google News is to offer more personalization with the help of AI. Perhaps that's surprising - surely Google has always been using AI across every feature? Well yes, to some extent. But this update brings artificial intelligence fully into the fold.

It may feel strange talking about AI and news at the moment. Concern over 'echo chambers' and 'fake news' has become particularly pronounced recently, and the Facebook and Cambridge Analytica scandal has thrown the spotlight on the relationship between platforms, publishers, and our data. That might explain why Google seems to be trying to counterbalance the move towards greater personalization with a new feature called Full Coverage. Full Coverage has been designed by Google as a means to tackle current concerns around 'echo chambers' and polarization in discourse. Such a move highlights a greater awareness of the impact the platform can have on politics and society. It suggests that by using AI in context, there's a way to get the balance right. "In order to make it easier to keep up and make sense of [today's constant flow of news and information from different sources and media], we set out to bring our news products into one unified experience," explained Trystan Upstill in a blog post.

Personalizing Google News with AI

By making use of advanced machine learning and AI techniques, Google will now offer you a more personalized way to read the news. With a new 'For You' tab, Google will organize a feed of news based on everything that the search engine knows about you, from your browsing habits to your location. "The more you use the app, the better the app gets," Upstill explains.

In a new feature called 'Newscasts', Google News will make use of natural language processing techniques to bring together a wide range of sources on a single topic. It seems strange to think that Google wasn't doing this before, but in actual fact it says a lot about how the platform dictates how we understand the scope of a debate or the way a news cycle is reported and presented. With Newscasts, it should be easier to illustrate the sheer range of voices currently out there. Fundamentally, Google is making its news feature smarter: where previously it relied upon keywords, Google's AI algorithms now become much more adept at understanding how different news stories evolve and how different things relate to one another. https://www.youtube.com/watch?v=wArETCVkS4g

Tackling the impact of personalization

With Full Coverage, Google News will provide a range of perspectives on a given news story. This seems to be a move to directly challenge the increased concern around online 'echo chambers.' Here's what Upstill says: "Having a productive conversation or debate requires everyone to have access to the same information. That's why content in Full Coverage is the same for everyone—it's an unpersonalized view of events from a range of trusted news sources." Essentially, it's about ensuring people have access to a broad overview of stories. Of course, Google is here acting a lot like a publisher or curator of news: even when giving a broad picture around a news story, there will still be an element of editorializing (whether human or algorithmic). However, it nevertheless demonstrates that Google has some awareness of the issues around online discourse and how its artificial intelligence systems can lead to a certain degree of polarization.

It's now easier to subscribe and follow your favorite news sources

The evolution of digital publishing has seen the rise of subscription models for many publishers, but that hasn't always been well-aligned with how readers search Google. It will now be easier to read and follow your favorite news sources on Google News. Not only will you be able to subscribe to news sources through your Google account, you'll also be able to see paywalled content you're subscribed to in your Google News feeds. That will certainly be a better reading experience. In turn, it means Google is helping to cement itself as the go-to place for news. Of course, Google could hardly be said to be under threat. But as native applications and social media platforms have come to define the news experience for many readers in recent years, this is a way for Google to stake a claim in an area in which it may be ever so slightly vulnerable.
Vincy Davis
23 Jul 2019
5 min read

Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool

Yesterday, Ubisoft Animation Studio (UAS) announced that it will fund the development of Blender as a corporate Gold member through the Blender Foundation's Development Fund, and that Ubisoft will adopt the open source animation software Blender as its main digital content creation (DCC) tool. The exact funding amount has not been disclosed. Gold corporate members of the Blender Development Fund get their logo displayed prominently on the blender.org dev fund page, are credited as Corporate Gold Members on blender.org and in official Blender Foundation communication, and have a strong voice in approving projects for Blender. Gold corporate members donate a minimum of EUR 30,000 for as long as they remain members.

Pierrot Jacquet, Head of Production at UAS, said in the press release, "Blender was, for us, an obvious choice considering our big move: it is supported by a strong and engaged community, and is paired up with the vision carried by the Blender Foundation, making it one of the most rapidly evolving DCCs on the market." He also believes that since Blender is an open source project, it will allow Ubisoft to share some of its own developed tools with the community. "We love the idea that this mutual exchange between the foundation, the community, and our studio will benefit everyone in the end," he adds.

As part of its new workflow, Ubisoft is creating a development environment supported by open source and inner source solutions. Blender will replace Ubisoft's in-house digital content creation tool and will be used to produce short content with the incubator; later, it will also be used in Ubisoft's upcoming shows in 2020. Per Jacquet, Blender 2.8 will be a "game-changer for the CGI industry". The Blender 2.8 beta is already out, and its stable version is expected in the coming days. Ubisoft was impressed with the growth of the internal Blender community as well as with the innovations expected in Blender 2.8, which will bring a revamped UX, Grease Pencil, EEVEE real-time rendering, a new 3D viewport, and UV editor tools to enhance the user experience. Ubisoft was thus convinced that this is the "right time to bring support to our artists and productions that would like to add Blender to their toolkit."

This news comes a week after Epic Games announced that it is awarding the Blender Foundation $1.2 million in cash, spanning three years, to accelerate the quality of Blender's software development projects. With two big companies funding Blender, the future does look bright. The Blender 2.8 preview features appear to have moved both companies to step forward and support Blender, as both Epic and Ubisoft announced their funding just days before the stable release of Blender 2.8. In addition to Epic and Ubisoft, corporate members include the animation studio Tangent, Valve, Intel, Google, and Canonical's Ubuntu Linux distribution.

Ton Roosendaal, founder and chairman of the Blender Foundation, is clearly delighted. "Good news keeps coming," he said. "It's such a miracle to witness the industry jumping on board with us! I've always admired Ubisoft, as one of the leading games and media producers in the world. I look forward to working with them and help them find their ways as a contributor to our open source projects on blender.org." https://twitter.com/tonroosendaal/status/1153376866604113920

Users are very happy and feel that this is a big step forward for Blender. https://twitter.com/nazzagnl/status/1153339812105064449 https://twitter.com/Nahuel_Belich/status/1153302101142978560 https://twitter.com/DJ_Link/status/1153300555986550785 https://twitter.com/cgmastersnet/status/1153438318547406849

Many also see this move as the industry's way of sidelining Autodesk, the company popularly known for its DCC tools. https://twitter.com/flarb/status/1153393732261072897

A Hacker News user comments, "Kudos to blender's marketing team. They get a bit of free money from this. But the true motive for Epic and Unisoft is likely an attempt to strong-arm Autodesk into providing better support and maintenance. Dissatisfaction with Autodesk, lack of care for their DCC tools has been growing for a very long time now, but studios also have a huge investment into these tools as part of their proprietary pipelines. Expect Autodesk to kowtow soon and make sure that none of these companies will make the switch. If it means that Autodesk actually delivers bug fixes for the version the customer has instead of one or two releases down the road, it is a good outcome for the studios."

Visit the Ubisoft website for more details.

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers
What to expect in Unreal Engine 4.23?
Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold 'Em Poker

Bhagyashree R
12 Apr 2019
5 min read

Mozilla and Google Chrome refuse to support Gab’s Dissenter extension for violating acceptable use policy

Earlier this year, Gab, the “free speech” social network and a popular forum for far-right viewpoint holders and other fringe groups, launched a browser extension named Dissenter that creates an alternative comment section for any website. The plug-in is now removed from the extension stores of both Mozilla and Google, as the extension violates their acceptable use policy. This decision comes after Columbia Journalism Review reported about the extension to the tech giants. https://twitter.com/nausjcaa/status/1116409587446484994 The Dissenter plug-in, which goes by the tagline “the comment section of the internet”, allows users to discuss any topic in real-time without fearing that their posted comment will be removed by a moderator. The plug-in failed to pass the review process of Mozilla and is now disabled for Firefox users. But, the users who have already installed the plug-in can continue to use it. The Gab team took to Twitter complaining about Mozilla’s Acceptable Use Policy. https://twitter.com/getongab/status/1116036111296544768 When asked for more clarity on which policies Dissenter did not comply with, Mozilla said that they received abuse reports for this extension. It further added that the platform is being used for promoting violence, hate speech, and discrimination, but they failed to show any examples to add any credibility to their claims. https://twitter.com/getongab/status/1116088926559666181 The extension developers responded by saying that they do moderate any illegal conduct or posts happening on their platform as and when they are brought to their attention. “We do not display content containing words from a list of the most offensive racial epithets in the English language,” added the Gab developers. Soon after this, Google Chrome also removed the extension from Chrome Extension Store stating the same reason that the extension does not comply with their policies. 
After getting deplatformed, the Dissenter team has concluded that the best way forward is to create its own browser, possibly by forking Chromium or the privacy-focused web browser Brave. “That’s it. We are going to fork Chromium and create a browser with Dissenter, ad blocking, and other privacy tools built in along with the guarantee of free speech that Silicon Valley does not provide.”

https://twitter.com/getongab/status/1116308126461046784

Gab does not moderate views posted by its users unless they are flagged for violations and says it “treats its users as adults”. So, unless people complain, the platform will not take action against the threats and hate speech posted in the comments. Though the platform is known for its tolerance of fringe views and has drawn immense heat from the public, things took a turn for the worse after the recent Christchurch shooting. A far-right extremist who shot dead more than 20 Muslims and left 30 others injured in two mosques in New Zealand had shared his extremist manifesto on social media sites like Gab and 8chan. He had also live-streamed the shooting on Facebook, YouTube, and others.

This is not the first time Gab has been involved in a controversy. Back in October last year, PayPal banned Gab following the anti-Semitic mass shooting in Pittsburgh. It was reported that the shooter was an active poster on the Gab website and had hinted at his intentions shortly before the attack. In the same month, hosting provider Joyent also suspended its services for Gab, and the platform was warned by Stripe for violations of its policies. Torba, the co-founder of Gab, said, “Payments companies like Paypal, Stripe, Square, Cash App, Coinbase, and Bitpay have all booted us off.
Microsoft Azure, Joyent, GoDaddy, Apple, Google’s Android store, and other infrastructure providers, too, have denied us service, all because we refuse to censor user-generated content that is within the boundaries of the law.”

Many users felt that this move by Mozilla contradicts its goal of making the web free and open for all.

https://twitter.com/VerGreeneyes/status/1116216415734960134

https://twitter.com/ChicalinaT/status/1116101257494761473

A Hacker News user added, “While Facebook, Reddit, Twitter and now Mozilla may think they're doing a good thing by blocking what they consider hateful speech, it's just helping these people double down on thinking they're in the right. We should not be afraid of ideas. Speech != violence. Violence is violence. With platforms banning more and more offensive content and increasing the label of what is bannable, we're seeing a huge split in our world. People who could once agree to disagree now don't even want to share the same space with one another. It's all call out culture and it's terrible.”

Many people think that this step is nothing but a move towards mass censorship. “I see it as an active endorsement of filter funneling comments sections online, given that despite the operators of Dissenter having tried to make efforts to comply with the terms of service Mozilla have imposed for being listed in their gallery, were given an unclear rationale as to how having "broken" these terms, and no clue as to what they were supposed to do to have avoided doing so,” adds a Reddit user.

Mozilla has not revoked the add-on’s signature, so Dissenter can still be distributed with the guarantee that the add-on is safe and can be updated automatically. Manual installation of the extension from Dissenter.com/download is also possible.
Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs

Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta

Mozilla is exploring ways to reduce notification permission prompt spam in Firefox