
Tech News - Game Development

93 Articles

Game Engine Wars: Unity vs Unreal Engine

Sugandha Lahoti
11 Apr 2018
6 min read
Ready, players. One, two, three! We begin with the epic battle between the two most prominent game engines out there: Unity vs Unreal Engine. Unreal Engine, the legacy engine, has been around for the past 20 years, while Unity, though relatively new (it is almost 12 years old), is nevertheless an equal champion. We will be evaluating these engines across six major factors. Without further ado, let the games begin.

Unity vs Unreal Engine: Performance

Performance is a salient factor when evaluating a game engine. Unreal Engine uses C++, a lower-level programming language that gives developers more control over memory management. On top of this, Unreal Engine gives developers full access to the C++ source code, allowing them to edit and upgrade anything in the system. Unity, on the other hand, uses C#, where memory management is out of the developer's control. No control over memory means the garbage collector can trigger at random times and ruin performance.

Unreal offers an impressive range of visual effects and graphical features. More importantly, they require no external plugins (unlike Unity) to create powerful FX, terrain, cinematics, gameplay logic, animation graphs, and so on. However, UE4 seems to perform various basic actions considerably slower: starting the engine, opening the editor, opening a project, and saving projects all take a long time, hampering the development process. Here is where Unity takes the edge. It is also the go-to game engine for creating mobile games.

Considering the above factors, we can say that in terms of sheer performance, Unreal 4 takes the lead over Unity. But Unity may be making up for this shortfall by being more in sync with the times, i.e., great for creating mobile games, impressive plugins for AR, etc. Also read about Unity 2D and 3D game kits to simplify game development for beginners.
Learning curve and ease of development

Unity provides an exhaustive list of resources to learn from. Its documentation is packed with complete descriptions complemented by numerous examples, as well as video and text tutorials and live training sessions. Along with the official Unity resources, there are also high-quality third-party tutorials available.

The Unreal Engine offers developers a free development license and full source code in exchange for a 5% royalty. Unreal Engine 4 also has Blueprint visual scripting. These tools are designed to let non-programmers and designers create games without writing a single line of code. They offer a better at-a-glance view of the game logic creation process, representing program flow as flowcharts with connections between nodes. These flowcharts make games a lot faster to prototype and execute.

Unity offers an Asset Store for developers to help them with all aspects of design. It features a mix of animation and rigging tools, GUI generators, and motion capture software. It also has powerful asset management and attribute inspection. Unity is generally seen as the more intuitive and easier-to-grasp game engine, but Unreal Engine features a straightforward UI that doesn't take long to get up and running. With this, we can say that Unity and Unreal are on par in terms of ease of use.

Unity vs Unreal Engine: Graphics

When it comes to graphics, Unreal Engine 4 is a giant. It includes capabilities to create high-quality 2D and 3D games with state-of-the-art techniques such as particle simulation systems, deferred shading, lit translucency, post-processing features, and advanced dynamic lighting. Unity is also not far behind, with features such as static batching, physically-based shading, the Shuriken particle system, low-level rendering access, and more.
Although Unreal Engine comes out to be the clear winner, if you don't need to create next-gen-level graphics then something like Unreal Engine 4 may not be required, and hence Unity wins.

Platform support/compatibility

Unity is the clear winner when it comes to the number of platforms supported. Here's a list of the platforms offered by both Unity and Unreal:

Platform | Unreal | Unity
iOS | Available | Available
Android | Available | Available
VR | Available | Available (also HoloLens)
Linux | Available | Available
Windows PC | Available | Available
Mac OS X | Available | Available
SteamOS | Available | Available
HTML5 | Available | Not Available
Xbox One | Available | Available (also Xbox 360)
PS4 | Available | Available
Windows Phone 8 | Not Available | Available
Tizen | Not Available | Available
Android TV and Samsung Smart TV | Not Available | Available
Web Player | Not Available | Available
WebGL | Not Available | Available
PlayStation Vita | Not Available | Available

Community support

Community support is an essential criterion for evaluating a tool, especially a free one. Both Unity and Unreal have large and active communities. Forums and other community sources have friendly members who are quick to respond and help out. Having said that, a larger community of game developers contributes to Unity's Asset Store. This saves significant time and effort, as developers can pick out special effects, sprites, animations, etc. directly from the store rather than developing them from scratch. Correspondingly, more developers share tutorials and offer tech support for Unity.

Unity vs Unreal Engine: Pricing

Unity offers a completely free version ready for download. This is a great option if you are new to game development. The Unity Pro version, which offers additional tools and capabilities (such as the Unity profiler), comes at $1,500 as a one-time charge, or $75/month. Unreal Engine 4, on the other hand, is completely free. There are no Pro or Free versions.
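Taken at face value, Unreal's royalty terms quoted in this article (5% of quarterly revenue beyond a $3,000 threshold) work out as a simple calculation. The sketch below shows one common reading of the terms; Epic's actual EULA is the authoritative source, so treat the exact threshold semantics as an assumption:

```python
def ue4_quarterly_royalty(gross_revenue: float) -> float:
    """Sketch of the UE4 royalty described in this article: 5% of
    quarterly gross revenue above a $3,000 threshold. The precise
    reading of the terms is an assumption; consult Epic's EULA."""
    THRESHOLD = 3_000.0
    RATE = 0.05
    return max(0.0, gross_revenue - THRESHOLD) * RATE

# A quarter grossing $50,000 would owe 5% of the $47,000 excess:
print(ue4_quarterly_royalty(50_000))  # 2350.0
print(ue4_quarterly_royalty(2_500))   # 0.0 (below the threshold)
```

In other words, small projects pay nothing at all, and the fee only scales once a game starts earning real revenue.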
However, Unreal Engine 4 has a royalty fee of 5% on resulting revenue if it exceeds $3,000 per quarter. Unreal Engine 4 is also completely free for colleges and universities, although the 5% royalty still applies. Both game engines are extremely affordable: Unity gives you access to the free version, which is still a powerful engine, and Unreal Engine 4 is of course completely free up front.

The verdict

The above analysis favors Unreal as the preferred gaming engine. In reality, though, it all boils down to the game developer. Choosing the right engine really depends on the type of game you want to create, your audience, and your expertise level (such as your choice of programming language). Both of these engines are evolving and changing at a rapid pace, and it is for the developer to decide where they want to head.

Also, check out:

Unity Machine Learning Agents: Transforming Games with Artificial Intelligence
Unity plugins for augmented reality application development
Unity releases ML-Agents v0.3: Imitation Learning, Memory-Enhanced Agents and more


SteamVR introduces new controllers for game developers, the SteamVR Input system

Sugandha Lahoti
16 May 2018
2 min read
SteamVR has announced new controller support, adding accessibility features to the virtual reality ecosystem. The SteamVR Input system lets you build controller bindings for any game, "even for controllers that didn't exist when the game was written", says Valve's Joe Ludwig in a Steam forum post. What this essentially means is that any past, present, or future game can hypothetically add support for any SteamVR-compatible controller.

Source: Steam community

Supported controllers include the Xbox One gamepad, Vive Tracker, Oculus Touch, and the motion controllers for HTC Vive and Windows Mixed Reality VR headsets.

The key-binding system of SteamVR Input allows users to build binding configurations. Users can adapt the controls of games to take into account user behavior such as left-handedness, a disability, or personal preference. These configurations can also be shared easily with other users of the same game via the Steam Workshop.

For developers, the new SteamVR Input system means easier adaptation of games to diverse controllers. Developers entirely control the default bindings for each controller type. They can also offer alternate control schemes directly without the need to change the games themselves. SteamVR Input works with every SteamVR application; it doesn't require developers to update their app to support it.

Hardware designers are also free to try more types of input, beyond the Vive Tracker, Oculus Touch, and so on. They can expose whatever input controls exist on their device and then describe that device to the system. Most importantly, the entire mechanism is captured in an easy-to-use UI that is available in-headset under the Settings menu.

Source: Steam community

For now, SteamVR Input is in beta. Details for developers are available on the OpenVR SDK 1.0.15 page. You can also see the documentation to enable native support in your applications.
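To make the developer side of this concrete: rather than reading hard-coded buttons, a game under SteamVR Input declares abstract actions, and per-controller binding files map them to physical inputs. The sketch below builds a minimal manifest-shaped structure in Python. The field names follow the general shape of SteamVR's action manifest (actions.json), but treat the exact schema and the controller-type names as assumptions and check the OpenVR documentation:

```python
# Hypothetical sketch of a SteamVR-style action manifest: the game
# declares abstract actions; per-controller binding files map them
# to physical inputs, so new hardware only needs a new binding file.
action_manifest = {
    "actions": [
        {"name": "/actions/main/in/fire", "type": "boolean"},
        {"name": "/actions/main/in/move", "type": "vector2"},
    ],
    "action_sets": [
        {"name": "/actions/main", "usage": "leftright"},
    ],
    "default_bindings": [
        # One entry per supported controller type (names assumed).
        {"controller_type": "vive_controller",
         "binding_url": "bindings_vive.json"},
        {"controller_type": "oculus_touch",
         "binding_url": "bindings_touch.json"},
    ],
}

# The game only ever asks "is /actions/main/in/fire active?"; which
# physical button that maps to depends on the user's binding config.
print(len(action_manifest["default_bindings"]))  # 2
```

This indirection is exactly what allows a binding configuration for a controller that "didn't exist when the game was written" to be added later without touching the game.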
Hardware developers can read the driver API documentation to see how they can enable this new system for their devices.

Google open sources Seurat to bring high precision graphics to Mobile VR
Oculus Go, the first stand-alone VR headset arrives!
Google Daydream powered Lenovo Mirage solo hits the market


How to create non-player Characters (NPC) with Unity 2018

Amarabha Banerjee
26 Apr 2018
10 min read
Today, we will learn to create game characters while focusing mainly on non-player characters. Our Cucumber Beetles will serve as our game's non-player characters and will be the Cucumber Man's enemies. We will incorporate Cucumber Beetles in our game through direct placement. We will review the beetles' 11 animations and make changes to the non-player character's animation controller. In addition, we will write scripts to control the non-player characters. We will also add cucumber patches, cucumbers, and cherries to our game world.

Understanding the non-player characters

Non-player characters, commonly referred to as NPCs, are simply game characters that are not controlled by a human player. These characters are controlled through scripts, and their behaviors are usually responsive to in-game conditions. Our game's non-player characters are the Cucumber Beetles. These beetles, as depicted in the following screenshot, have six legs that they can walk on; under special circumstances, they can also walk on their hind legs.

Cucumber Beetles are real insects and are a threat to cucumbers. They cannot really walk on their hind legs, but they can in our game.

Importing the non-player characters into our game

You are now ready to import the asset package for our game's non-player character, the Cucumber Beetle.
Go through the following steps to import the package:

1. Download the Cucumber_Beetle.unitypackage file from the publisher's companion website.
2. In Unity, with your game project open, select Assets | Import Package | Custom Package from the top menu.
3. Navigate to the location of the asset package you downloaded in step 1 and click the Open button.
4. When presented with the Import Asset Package dialog window, click the Import button.

As you will notice, the Cucumber_Beetle asset package contains several assets related to the Cucumber Beetles, including a controller, scripts, a prefab, animations, and other assets. Now that the Cucumber_Beetle asset package has been imported into our game project, we should save our project. Use the File | Save Project menu option.

Next, let's review what was imported. In the Project panel, under Assets | Prefabs, you will see a new Beetle.Prefab. Also in the Project panel, under Assets, you will see a Beetle folder. It is important that you understand what each component in the folder is for. Please refer to the following screenshot for an overview of the assets that you will be using in this chapter with regard to the Cucumber Beetle. The other assets in the screenshot that were not called out include a readme.txt file, the texture and materials for the Cucumber Beetle, and the source files. We will review the Cucumber Beetle's animations in the next section.

Animating our non-player characters

Several Cucumber Beetle animations have been prepared for use in our game. Here is a list of the animation names as they appear in our project, along with brief descriptions of how we will incorporate each animation into our game.
The animations are listed in alphabetical order by name:

Animation Name | Usage Details
Attack_Ground | The beetle attacks the Cucumber Man's feet from the ground
Attack_Standing | The beetle attacks the Cucumber Man from a standing position
Die_Ground | The beetle dies from a starting position on the ground
Die_Standing | The beetle dies from a starting position of standing on its hind legs
Eat_Ground | The beetle eats cucumbers while on the ground
Idle_Ground | The beetle is not eating, walking, fighting, or standing
Idle_Standing | The beetle is standing, but not walking, running, or attacking
Run_Standing | The beetle runs on its hind legs
Stand | The beetle goes from an on-the-ground position to standing (it stands up)
Walk_Ground | The beetle walks using its six legs
Walk_Standing | The beetle walks on its hind legs

You can preview these animations by clicking on an animation file, such as Eat_Ground.fbx, in the Project panel. Then, in the Inspector panel, click the play button to watch the animation. There are 11 animations for our Cucumber Beetle, and we will use scripting later to determine when each animation is played. In the next section, we will add the Cucumber Beetle to our game.

Incorporating the non-player characters into our game

First, let's simply drag the Beetle.Prefab from the Assets/Prefab folder in the Project panel into our game in Scene view. Place the beetle somewhere in front of the Cucumber Man so that the beetle can be seen as soon as you put the game into game mode. A suggested placement is illustrated in the following screenshot.

When you put the game into game mode, you will notice that the beetle cycles through its animations. If you double-click the Beetle.controller in the Assets | Beetle folder in the Project panel, you will see, as shown in the following screenshot, that we currently have several animations set to play successively and repeatedly. This initial setup is intended to give you a first, quick way of previewing the various animations.
In the next section, we will modify the animation controller.

Working with the Animation Controller

We will use an Animation Controller to organize our NPCs' animations. The Animation Controller will also be used to manage the transitions between animations. Before we start making changes to our Animation Controller, we need to identify what states our beetle has and then determine what transitions each state can have in relation to other states. Here are the states that the beetle can have, each tied to an animation:

- Idle on Ground
- Walking on Ground
- Eating on Ground
- Attacking on Ground
- Die on Ground
- Stand
- Standing Idle
- Standing Walk
- Standing Run
- Standing Attack
- Die Standing

With the preceding list of states, we can assign the following transitions.

From Idle on Ground to:
- Walking on Ground
- Running on Ground
- Eating on Ground
- Attacking on Ground
- Stand

From Stand to:
- Standing Idle
- Standing Walk
- Standing Run
- Standing Attack

Reviewing the transitions from Idle on Ground to Stand demonstrates the type of state-to-state transition decisions you need to make for your game.

Let's turn our attention back to the Animation Controller window. You will notice that there are two tabs in the left panel of that window: Layers and Parameters. The Layers tab shows a Base Layer. While we can create additional layers, we do not need to do this for our game. The Parameters tab is empty, and that is fine. We will make our changes using the Layout area of the Animation Controller window. That is the area with the grid background.

Let's start by making the following changes. For all 11 New State buttons, do the following:

1. Left-click the state button.
2. Look in the Inspector panel to determine which animation is associated with the state button.
3. Rename the state in the Inspector panel to reflect the animation.
4. Press the Return key.
5. Double-check the state button to ensure your change was made.

When you have completed the preceding five steps for all 11 states, your Animation Controller window should match the following screenshot. If you were to put the game into game mode, you would see that nothing has changed; we only changed the state names so they made more sense to us. So, we have some more work to do with the Animation Controller.

Currently, the Attacking on Ground state is the default. That is not what we want. It makes more sense for the Idle on Ground state to be our default. To make that change, right-click the Idle on Ground state and select Set as Layer Default State.

Next, we need to make a series of changes to the state transitions. There are a lot of states, and there will be a lot of transitions. In order to make things easier, we will start by deleting all the default transitions. To accomplish this, left-click each white line with an arrow and press your keyboard's Delete key. Do not delete the orange line that goes from Entry to Idle on Ground. After all transitions have been deleted, you can drag your states around so you have more working room. You might temporarily reorganize them in a manner similar to what is shown in the following screenshot.

Our next task is to create all of our state transitions. Follow these steps for each state transition you want to add:

1. Right-click the originating state.
2. Select Create Transition.
3. Click on the destination state.

Once you have made all your transitions, you can reorganize your states to declutter the Animation Controller's layout area. A suggested final organization is provided in the following screenshot. As you can see in our final arrangement, we have 11 states and over two dozen transitions. You will also note that the Die on Ground and Die Standing states do not have any transitions. In order for us to use these animations in our game, they must be placed into an Animation Controller.
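Outside of Unity, the state-and-transition bookkeeping described above can be thought of as a small table-driven state machine. The sketch below (plain Python for brevity, not Unity API; state names mirror the ones used in this chapter, and the transition table is abridged to the transitions listed above) shows the core idea of rejecting transitions that the Animation Controller does not define:

```python
# Minimal table-driven sketch of the beetle's animation states.
# Only the transitions spelled out above are included; a complete
# game would list all of them. The Die states are dead ends, which
# is why a defeated beetle stays in its final animation pose.
TRANSITIONS = {
    "Idle on Ground": {"Walking on Ground", "Running on Ground",
                       "Eating on Ground", "Attacking on Ground", "Stand"},
    "Stand": {"Standing Idle", "Standing Walk",
              "Standing Run", "Standing Attack"},
    "Die on Ground": set(),   # terminal: no outgoing transitions
    "Die Standing": set(),    # terminal: no outgoing transitions
}

class BeetleStateMachine:
    def __init__(self, start="Idle on Ground"):
        self.state = start

    def transition(self, new_state: str) -> bool:
        """Move to new_state only if the transition table allows it."""
        if new_state in TRANSITIONS.get(self.state, set()):
            self.state = new_state
            return True
        return False

beetle = BeetleStateMachine()
print(beetle.transition("Stand"))           # True
print(beetle.transition("Die on Ground"))   # False: not defined from Stand
```

Unity's Animation Controller enforces exactly this kind of table for you; laying it out explicitly first makes it easier to decide which transitions your game actually needs.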
Let's run a quick experiment:

1. Select the Beetle character in the Hierarchy panel.
2. In the Inspector panel, click the Add Component button.
3. Select Physics | Box Collider.
4. Click the Edit Collider button.
5. Modify the size and position of the box collider so that it encases the entire beetle body.
6. Click the Edit Collider button again to exit edit mode.

Your box collider should look similar to what is depicted in the following screenshot.

Next, let's create a script that invokes the Die on Ground animation when the Cucumber Man character collides with the beetle. This will simulate the Cucumber Man stepping on the beetle. Follow these steps:

1. Select the Beetle character in the Hierarchy panel.
2. In the Inspector panel, click the Add Component button.
3. Select New Script.
4. Name the script BeetleNPC.
5. Click the Create and Add button.
6. In the Project view, select Favorites | All Scripts | BeetleNPC.
7. Double-click the BeetleNPC script file.
8. Edit the script so that it matches the following code block:

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class BeetleNPC : MonoBehaviour
{
    Animator animator;

    // Use this for initialization
    void Start()
    {
        animator = GetComponent<Animator>();
    }

    // Collision Detection Test
    void OnCollisionEnter(Collision col)
    {
        if (col.gameObject.CompareTag("Player"))
        {
            animator.Play("Die on Ground");
        }
    }
}
```

This code detects a collision between the Cucumber Man and the beetle. If a collision is detected, the Die on Ground animation is played. As you can see in the following screenshot, the Cucumber Man defeated the Cucumber Beetle.

This short test demonstrated two important things that will help us further develop this game:

- Earlier in this section, you renamed all the states in the Animation Controller window. The names you gave the states are the ones you will reference in code.
- Since the animation we used did not have any transitions to other states, the Cucumber Beetle will remain in the final position of the animation unless we script it otherwise. So, if we had 100 beetles and defeated them all, all 100 would remain on their backs in the game world.

This was a simple and successful scripting test for our Cucumber Beetle. We will need to write several more scripts to manage the beetles in our game. First, there are some game world modifications we will make.

To summarize, we discussed how to create interesting character animations and bring them to life using the Unity 2018 platform. You read an extract from the book Getting Started with Unity 2018, written by Dr. Edward Lavieri. This book gives you a practical understanding of how to get started with Unity 2018.

Read More:

Unity 2D & 3D game kits simplify Unity game development for beginners
Build a Virtual Reality Solar System in Unity for Google Cardboard
Unity plugins for augmented reality application development


Blender celebrates its 25th birthday!

Natasha Mathur
03 Jan 2019
3 min read
Blender, the free and open source 3D computer graphics software, celebrated its 25th birthday yesterday. The Blender team marked the occasion by publishing a post that traces the journey of Blender from 1993 to 2018, taking a trip down memory lane.

Blender's journey (1994 - 2018)

The Blender team states that during Christmas 1993, Ton Roosendaal, the creator of Blender, started working on the software, making use of designs that he had made during a 1993 course.

Original design doc from 1993

The first Blender version came to life on January 2nd, 1994, when the subdivision-based windowing system first worked. This date has now been marked as Blender's official birthday, and Roosendaal even has an old backup of this version on his SGI Indigo2 workstation. Blender was first released publicly online on January 1st, 1998 as SGI freeware. The Linux and Windows versions of Blender were released shortly after.

In May 2002, Roosendaal started the non-profit Blender Foundation. Its first goal was to find a way to continue the development and promotion of Blender as a community-based open source project.

https://www.youtube.com/watch?time_continue=164&v=8A-LldprfiE

Blender's 25th birthday

With the popularity of the internet in the early 2000s, the source code for Blender became available under the GNU General Public License (GPL) on October 13th, 2002. This day marked Blender as the open source and free 3D creation software that we use to this day.

The Blender team started "Project Orange" in 2005, which resulted in the world's first and widely recognized Open Movie, "Elephants Dream". The success of the open movie project led Roosendaal to establish the "Blender Institute" in the summer of 2007.
The Blender Institute has since become the permanent office and studio where the team organizes the Blender Foundation's goals and facilitates the Open Projects related to 3D movies, games, or visual effects.

In early 2008, Roosendaal started the Blender 2.5 project, a major overhaul of the UI, tool definitions, data access system, event handling, and animation system. The main goal of the project was to bring the core of Blender up to contemporary interface standards and input methods. The first alpha version of Blender 2.5 was presented at Siggraph 2009, with the final 2.5 release published in 2011.

In 2012, the Blender team put its focus on further developing and exploring a visual effects creation pipeline, with features such as motion tracking, camera solving, masking, grading, and a good color pipeline.

Coming back to 2018: it was just last week that the Blender team released Blender 2.8, with a revamped user interface, a high-end viewport, and other great features.

Mozilla partners with Khronos Group to bring glTF format to Blender
Building VR objects in React V2 2.0: Getting started with polygons in Blender
Blender 2.5: Detailed Render of the Earth from Space


What to expect in Unreal Engine 4.23?

Vincy Davis
12 Jul 2019
3 min read
A few days ago, Epic released the first preview of Unreal Engine 4.23 for the developer community to check out its features and report back in case of any issues before the final release. This version adds Skin Weight Profiles, VR Scouting tools, and new Pro Video Codecs, along with many updates to features like XR, animation, core, virtual production, gameplay and scripting, audio, and more. The previous version, Unreal Engine 4.22, focused on adding photorealism in real-time environments.

Some updates in Unreal Engine 4.23:

XR

- HoloLens 2 Native Support: Unreal Engine 4.23 adds native platform support for Microsoft's HoloLens 2.
- Stereo Panoramic Capture Tool Improvements: Updates to the Stereo Panoramic Capture tool will make it much easier to capture high-quality stereoscopic stills and videos of the virtual world in industry-standard formats, and to view those captures in an Oculus or GearVR headset.

Animation

- Skin Weight Profiles: The new Skin Weight Profile system will enable users to override the original Skin Weights that are stored with a Skeletal Mesh.
- Animation Streaming: This is aimed at improving memory management for animation data.
- Sub Animation Graphs: New Sub Anim Graphs will allow dynamic switching of sub-sections of an Animation Graph, enabling multi-user collaboration and memory savings for vaulted or unavailable items.

Core

- Unreal Insights Tool: This will help developers collect and analyze data about the Engine's behavior in a uniform fashion. The system has three components. The Trace System API gathers information from runtime systems in a consistent format and captures it for later processing; multiple live sessions can contribute data at the same time. The Analysis API processes data from the Trace System API and converts it into a form that the Unreal Insights tool can use.
The Unreal Insights tool itself will provide an interactive visualization of data processed through the Analysis API, giving developers a unified interface for stats, logs, and metrics from their application.

Virtual Production

- Remote Control over HTTP
- Extended LiveLink Plugin
- New VR Scouting tools
- New Pro Video Codecs
- nDisplay: Warp and Blend for Curved Surfaces
- Virtual Camera Improvements

Gameplay & Scripting

- UMG Widget Diffing: Expanded and improved Blueprint Diffing will now support Widget Blueprints as well as Actor and Animation Blueprints.

Audio

- Open Sound Control: This will enable a native implementation of the Open Sound Control (OSC) standard in an Unreal Engine plugin.
- Wave Table Synthesis: The new monophonic wavetable synthesizer leverages UE4's built-in curve editor to author time-domain wavetables, enabling a wide range of sound design capabilities driven by gameplay parameters.

There are many more updates for the Editor, the Niagara editor, physics simulation, the rendering system, and the Sequencer multi-track editor in Unreal Engine 4.23. The Unreal Engine team has notified users that the preview release is not fully quality tested and should be considered unstable until the final release.

Users are excited to try the latest version of Unreal Engine 4.23.

https://twitter.com/ClicketyThe/status/1149070536762372096
https://twitter.com/cinedatabase/status/1149077027565309952
https://twitter.com/mygryphon/status/1149334005524750337

Visit the Unreal Engine page for more details.

Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)
Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices
What's new in Unreal Engine 4.19?


CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers

Vincy Davis
19 Jul 2019
5 min read
Two days ago, researchers from Facebook AI Research published a paper titled "CraftAssist: A Framework for Dialogue-enabled Interactive Agents". The authors of this research are Facebook AI research engineers Jonathan Gray and Kavya Srinet; Facebook AI research scientists C. Lawrence Zitnick and Arthur Szlam; and Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, and Siddharth Goyal.

The paper describes the implementation of an assistant bot called CraftAssist, which appears and interacts like another player in the open sandbox game Minecraft. The framework enables players to interact with the bot via in-game chat through various implemented tools and platforms, and these interactions can be recorded. The main aim of the bot is to be a useful and entertaining assistant for all the tasks listed and evaluated by the human players.

Image Source: CraftAssist paper

To motivate the wider AI research community to use the CraftAssist platform in their own experiments, the Facebook researchers have open-sourced the framework, the baseline assistant, the data, and the models. The released data includes the functions that were used to build the 2,586 houses in Minecraft, the labeling data for the walls, roofs, etc. of the houses, human rephrasings of fixed commands, and the conversion of natural language commands to bot-interpretable logical forms. The technology that allows the recording of human and bot interaction on a Minecraft server has also been released, so that researchers can independently collect data.

Why is the Minecraft protocol used?

Minecraft is a popular multiplayer volumetric pixel (voxel) 3D game based on building and crafting, which allows multiplayer servers and players to collaborate and build, survive, or compete with each other. It operates through a client and server architecture. The CraftAssist bot acts as a client and communicates with the Minecraft server using the Minecraft network protocol.
The Minecraft protocol allows the bot to connect to any Minecraft server without the need to install server-side mods. This lets the bot easily join a multiplayer server along with human players or other bots, as well as join any alternative server that implements the server-side component of the Minecraft network protocol. The CraftAssist bot uses the third-party open source Cuberite server, a fast and extensible game server for Minecraft.

Read More: Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users

How does CraftAssist function?

The block diagram below demonstrates how the bot interacts with incoming in-game chats and reaches the desired target.

Image Source: CraftAssist paper

First, the incoming text is transformed into a logical form called the action dictionary. The action dictionary is then translated by a dialogue object, which interacts with the memory module of the bot. This produces an action or a chat response to the user.

The bot's memory uses a relational database, structured to capture the relations between stored items of information. The major advantage of this type of memory is that the semantic parser's output can easily be converted into fully specified tasks.

The bot responds to higher-level actions, called Tasks. Tasks are interruptible processes that follow a clear objective of step-by-step actions. They can adjust to long pauses between steps and can also push other Tasks onto a stack, much the way functions can call other functions in a standard programming language. Move, Build, and Destroy are a few of the many basic Tasks assigned to the bot.

The Dialogue Manager checks for illegal or profane words, then queries the semantic parser. The semantic parser takes the chat as input and produces an action dictionary. The action dictionary indicates that the text is a command given by a human and then specifies the high-level action to be performed by the bot.
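As an illustration of that chat-to-action-dictionary-to-Task pipeline, the toy sketch below mocks each stage in Python. All function names, dictionary keys, and the keyword-matching "parser" are hypothetical stand-ins for this illustration, not the actual CraftAssist API; the real parser is a neural model and the real logical forms are far richer:

```python
# Toy stand-in for the chat -> action dictionary -> Task pipeline.
# Names and keys are invented for illustration; see the CraftAssist
# repository for the real logical forms.
def parse_chat(chat: str) -> dict:
    """Crude keyword 'semantic parser' producing an action dictionary."""
    if chat.startswith("move to"):
        x, y, z = (int(t) for t in chat.split()[2:5])
        return {"dialogue_type": "HUMAN_GIVE_COMMAND",
                "action": {"action_type": "MOVE", "location": (x, y, z)}}
    return {"dialogue_type": "NOOP"}

class MoveTask:
    """Stand-in for an interruptible Move task with a target location."""
    def __init__(self, target):
        self.target = target

def dispatch(action_dict: dict, task_stack: list) -> None:
    """Dialogue-object stand-in: push a Task for a MOVE command."""
    action = action_dict.get("action", {})
    if action.get("action_type") == "MOVE":
        task_stack.append(MoveTask(action["location"]))

stack = []
dispatch(parse_chat("move to 3 64 -2"), stack)
print(stack[0].target)  # (3, 64, -2)
```

The key design idea this mirrors is the separation of concerns: parsing produces a declarative logical form, and task execution consumes it, so either side can be swapped out independently.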
Once the task is created and pushed onto the Task stack, it is the responsibility of the ‘Move’ Task to compare the bot’s current location to the target location; the bot then undertakes a sequence of low-level step movements to reach the target.

The core of the bot’s understanding of natural language depends on a neural semantic parser called the Text-to-Action-Dictionary (TTAD) model. This model receives the incoming command/chat and classifies it into an action dictionary which is interpreted by the Dialogue Object.

The CraftAssist framework thus enables the bots in Minecraft to interact and play with players by understanding human interactions, using the implemented tools. The researchers hope that since the dataset of CraftAssist is now open-sourced, more developers will be empowered to contribute to this framework by assisting or training the bots, which might lead to the bots learning from human dialogue interactions in the future.

Developers have found the CraftAssist framework interesting.

https://twitter.com/zehavoc/status/1151944917859688448

A user on Hacker News comments, “Wow, this is some amazing stuff! Congratulations!”

Check out the paper CraftAssist: A Framework for Dialogue-enabled Interactive Agents for more details.

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
What to expect in Unreal Engine 4.23?
A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news
Unity switches to WebAssembly as the output format for the Unity WebGL build target

Sugandha Lahoti
16 Aug 2018
2 min read
With the launch of the Unity 2018.2 release last month, Unity is finally making the switch to WebAssembly as the output format for the Unity WebGL build target. WebAssembly support was first teased in Unity 5.6 as an experimental feature. Unity 2018.1 marked the removal of the experimental label. And finally, in 2018.2, WebAssembly replaces asm.js as the default linker target.

Source: Unity Blog

WebAssembly replaced asm.js because it is faster, smaller and more memory-efficient, which addresses all the pain points of the Unity WebGL export. A WebAssembly file is a binary file (which is a more compact way to deliver code), as opposed to asm.js, which is text. In addition, code modules that have already been compiled can be stored in an IndexedDB cache, resulting in a really fast startup when reloading the same content. In WebAssembly, the code size for an empty project is ~12% smaller, or ~18% if 3D physics is included.

Source: Unity Blog

WebAssembly also has its own instruction set. In Development builds, it adds more precise error-detection in arithmetic operations. In non-development builds, this kind of detection of arithmetic errors is masked, so the user experience is not affected.

asm.js placed a restriction on the size of the Unity Heap: its size had to be specified at build time and could never change. WebAssembly enables the Unity Heap size to grow at runtime, which lets Unity content memory usage exceed the initial heap size. Unity is now working on multi-threading support, which will initially be released as an experimental feature and will be limited to internal native threads (no C# threads yet). Debugging hasn’t got any better. While browsers have begun to provide WebAssembly debugging in their devtools suites, these debuggers do not yet scale well to Unity3D sizes of content.
What’s next to come

Unity is still working on new features and optimizations to improve startup times and performance:

- Asynchronous instantiation
- Structured cloning, which allows compiled WebAssembly to be cached in the browser
- Baseline and tiered compilation, to speed up instantiation
- Streaming instantiation to compile Assembly code while downloading it
- Multi-Threading

You can read the full details on the Unity Blog.

Unity 2018.2: Unity release for this year second time in a row!
GitHub for Unity 1.0 is here with Git LFS and file locking support
What you should know about Unity 2018 Interface

Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool

Vincy Davis
23 Jul 2019
5 min read
Yesterday, Ubisoft Animation Studio (UAS) announced that it will fund the development of Blender as a corporate Gold member through the Blender Foundation’s Development Fund. It has also been announced that Ubisoft will be adopting the open-source animation software Blender as its main digital content creation (DCC) tool. The exact funding amount has not been disclosed. Gold corporate members of the Blender Development Fund have their logo displayed prominently on the blender.org dev fund page, are credited as Corporate Gold Members on blender.org and in official Blender Foundation communication, and have a strong voice in approving projects for Blender. Gold corporate members donate a minimum of EUR 30,000 for as long as they remain members.

Pierrot Jacquet, Head of Production at UAS, mentioned in the press release, “Blender was, for us, an obvious choice considering our big move: it is supported by a strong and engaged community, and is paired up with the vision carried by the Blender Foundation, making it one of the most rapidly evolving DCCs on the market.” He also believes that since Blender is an open source project, it will allow Ubisoft to share some of its own developed tools with the community. “We love the idea that this mutual exchange between the foundation, the community, and our studio will benefit everyone in the end”, he adds.

As part of its new workflow, Ubisoft is creating a development environment supported by open source and inner source solutions. Blender will replace Ubisoft’s in-house digital content creation tool and will be used to produce short content with the incubator. Later, Blender will also be used in Ubisoft’s upcoming shows in 2020. Per Jacquet, Blender 2.8 will be a “game-changer for the CGI industry”. The Blender 2.8 beta is already out, and its stable version is expected to be released in the coming days.
Ubisoft was impressed with the growth of the internal Blender community as well as with the innovations expected in Blender 2.8, which will have a revamped UX, Grease Pencil, EEVEE real-time rendering, and new 3D viewport and UV editor tools to enhance the user experience. Ubisoft was thus convinced that this is the “right time to bring support to our artists and productions that would like to add Blender to their toolkit.” This news comes a week after Epic Games announced that it is awarding Blender Foundation $1.2 million in cash spanning three years, to accelerate the quality of their software development projects. With two big companies funding Blender, the future does look bright for them. The Blender 2.8 preview features appear to have prompted both companies to step forward and support Blender, as both Epic and Ubisoft announced their funding just days before the stable release of Blender 2.8. In addition to Epic and Ubisoft, corporate members include animation studio Tangent, Valve, Intel, Google, and Canonical's Ubuntu Linux distribution.

Ton Roosendaal, founder and chairman of Blender Foundation, is surely a happy man when he says that “Good news keeps coming”. He added, “it’s such a miracle to witness the industry jumping on board with us! I’ve always admired Ubisoft, as one of the leading games and media producers in the world. I look forward to working with them and help them find their ways as a contributor to our open source projects on blender.org.”

https://twitter.com/tonroosendaal/status/1153376866604113920

Users are very happy and feel that this is a big step forward for Blender.
https://twitter.com/nazzagnl/status/1153339812105064449
https://twitter.com/Nahuel_Belich/status/1153302101142978560
https://twitter.com/DJ_Link/status/1153300555986550785
https://twitter.com/cgmastersnet/status/1153438318547406849

Many also see this move as the industry’s way of sidelining Autodesk, whose DCC tools are widely used.

https://twitter.com/flarb/status/1153393732261072897

A Hacker News user comments, “Kudos to blender's marketing team. They get a bit of free money from this. But the true motive for Epic and Unisoft is likely an attempt to strong-arm Autodesk into providing better support and maintenance. Dissatisfaction with Autodesk, lack of care for their DCC tools has been growing for a very long time now, but studios also have a huge investment into these tools as part of their proprietary pipelines. Expect Autodesk to kowtow soon and make sure that none of these companies will make the switch. If it means that Autodesk actually delivers bug fixes for the version the customer has instead of one or two releases down the road, it is a good outcome for the studios.”

Visit the Ubisoft website for more details.

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers
What to expect in Unreal Engine 4.23?
Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

What's new in Unreal Engine 4.19?

Sugandha Lahoti
16 Apr 2018
3 min read
The highly anticipated Unreal Engine 4.19 is now generally available. This release hosts a new Live Link plugin, improvements to Sequencer, a new Dynamic Resolution feature, and multiple workflow and usability improvements. In addition to all of these major updates, this release also features a massive 128 improvements based on queries submitted by the Unreal Engine developer community on GitHub. Unreal Engine 4.19 allows game developers to know exactly what their finished game will look like at every step of the development process. This update comes with three major goals: let developers step inside the creative process, build gaming worlds that run faster than ever before, and give developers full control. Here's a list of the major features and what they bring to the game development process.

New Unreal Engine 4.19 features

Live Link Plugin improvements

The Maya Live Link Plugin is now available and can be used to establish a connection between Maya and UE4 to preview changes in real time. Virtual Subjects have been added to Live Link, which can also be used with Motion Controllers. Live Link Sources can now define their own custom settings. A Virtual Initialization function and an Update DeltaTime parameter have also been added to the Live Link Retargeter API.

Source: Unreal Engine blog

Unified Unreal AR framework

The Unreal Augmented Reality Framework provides a unified framework for building Augmented Reality (AR) apps for both Apple and Google handheld platforms using a single code path. Features include functions supporting Alignment, Light Estimation, Pinning, Session State, Trace Results, and Tracking.

Temporal upsampling

The new upscaling method, Temporal Upsample, uses two separate screen percentages for upscaling: a primary screen percentage that by default uses the spatial upscale pass as before, and a secondary screen percentage that is a static, spatial-only upscale at the very end of post-processing, before the UI draws.
Dynamic resolution

Dynamic Resolution adjusts the resolution to achieve the desired frame rate for games on PlayStation 4 and Xbox One. It uses a heuristic to set the primary screen percentage based on the previous frame's GPU workload.

Source: Unreal Engine blog

Physical light units

All light units are now defined using physically based units. The new light unit property can be edited per light, changing how the engine interprets the intensity property when doing lighting-related computations.

Source: Unreal Engine blog

Landscape rendering optimization

The Landscape level of detail (LOD) system now uses screen size to determine the detail for a component, similar to how the Static Mesh LOD system works. Starting from this release, all existing UE4 content that supports SteamVR is compatible with HTC's newly announced Vive Pro. These are just a select few updates to the Unreal Engine. The full list of release notes is available on the Unreal Engine forums.
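The kind of feedback heuristic a dynamic-resolution system relies on can be illustrated with a short sketch. This is an illustrative model only, not Unreal's actual implementation; the function name, target frame time, clamping bounds, and damping factor are all assumptions made for this example.

```python
def update_screen_percentage(current_pct, gpu_ms, target_ms=16.6,
                             min_pct=50.0, max_pct=100.0, damping=0.1):
    """Adjust the primary screen percentage toward the GPU budget.

    If the previous frame's GPU time exceeded the target frame time,
    lower the rendering resolution; if there is headroom, raise it.
    Values are illustrative, not Unreal Engine defaults.
    """
    # Ratio > 1 means the GPU is over budget and resolution must drop.
    ratio = gpu_ms / target_ms
    # A damped adjustment avoids oscillating between resolutions
    # when the workload hovers near the budget.
    new_pct = current_pct * (1.0 - damping * (ratio - 1.0))
    return max(min_pct, min(max_pct, new_pct))

# Example: the previous frame took 20 ms against a 16.6 ms (60 fps)
# budget at 100% resolution, so the percentage is nudged downward.
pct = update_screen_percentage(100.0, 20.0)
```

Clamping to a minimum percentage keeps the image from degrading past a floor, which is why such systems trade resolution, rather than frame rate, under load.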

Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

Sugandha Lahoti
12 Jul 2019
5 min read
Researchers from Facebook and Carnegie Mellon University have developed an AI bot that has defeated human professionals in six-player no-limit Texas Hold’em poker. Pluribus defeated pro players in both a “five AIs + one human player” format and a “one AI + five human players” format. Pluribus was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of the AI played against one professional. This is the first time an AI bot has beaten top human players in a complex game with more than two players or two teams. Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm of Carnegie Mellon University. Pluribus builds on Libratus, their previous poker-playing AI, which defeated professionals at Heads-Up Texas Hold ’Em, a two-player game, in 2017.

Mastering six-player poker is difficult for an AI considering the number of possible actions. First, since the game involves six players, it has many more variables, and the bot can’t compute a perfect strategy for each game, as it would for a two-player game. Second, poker involves hidden information: a player only has access to the cards that they see. The AI has to take into account how it would act with different cards, so it isn’t obvious when it has a good hand. Brown wrote on a Hacker News thread, “So much of early AI research was focused on beating humans at chess and later Go. But those techniques don't directly carry over to an imperfect-information game like poker. The challenge of hidden information was kind of neglected by the AI community. This line of research really has its origins in the game theory community actually (which is why the notation is completely different from reinforcement learning). Fortunately, these techniques now work really really well for poker.”

What went behind Pluribus?
Initially, Pluribus engages in self-play by playing against copies of itself, without any data from human or prior AI play used as input. The AI starts from scratch by playing randomly, and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy. Pluribus’s self-play produces a strategy for the entire game offline, called the blueprint strategy. During play, Pluribus improves upon the blueprint strategy by searching for a better strategy in real time for the situations it finds itself in. This online search algorithm can efficiently evaluate its options by searching just a few moves ahead rather than only to the end of the game.

Real-time search

The blueprint strategy in Pluribus was computed using a variant of counterfactual regret minimization (CFR). The researchers used Monte Carlo CFR (MCCFR), which samples actions in the game tree rather than traversing the entire game tree on each iteration. Pluribus only plays according to this blueprint strategy in the first betting round (of four), where the number of decision points is small enough that the blueprint strategy can afford to not use information abstraction and have a lot of actions in the action abstraction. After the first round, Pluribus instead conducts a real-time search to determine a better, finer-grained strategy for the current situation it is in.

https://youtu.be/BDF528wSKl8

What is astonishing is that Pluribus uses very little processing power and memory, less than $150 worth of cloud computing resources. The researchers trained the blueprint strategy for Pluribus in eight days on a 64-core server and required less than 512 GB of RAM. No GPUs were used. Stassa Patsantzis, a Ph.D. research student, appreciated Pluribus’s resource-friendly compute power. She commented on Hacker News, “That's the best part in all of this. 
I'm hoping that there is going to be more of this kind of result, signaling a shift away from Big Data and huge compute and towards well-designed and efficient algorithms.” She also noted that this is significantly less than the compute used by ML algorithms at DeepMind and OpenAI: “In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently”, she added.

Real-life implications

AI bots such as Pluribus give a better understanding of how to build general AI that can cope with multi-agent environments, both with other AI agents and with humans. A six-player AI bot has better implications in reality because two-player zero-sum interactions (in which one player wins and one player loses) are common in recreational games, but very rare in real life. These AI bots can be used for handling harmful content, dealing with cybersecurity challenges, or managing an online auction or navigating traffic, all of which involve multiple actors and/or hidden information.

Meanwhile, Darren Elias, a four-time World Poker Tour title holder who helped test the program's skills, said Pluribus could spell the end of high-stakes online poker. "I don't think many people will play online poker for a lot of money when they know that this type of software might be out there and people could use it to play against them for money." Poker sites are actively working to detect and root out possible bots. Brown, Pluribus' developer, on the other hand, is optimistic. He says it's exciting that a bot could teach humans new strategies and ultimately improve the game. "I think those strategies are going to start penetrating the poker community and really change the way professional poker is played," he said. For more information on Pluribus and its workings, read Facebook’s blog.
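The regret-matching rule at the heart of CFR-style algorithms like the one behind Pluribus can be illustrated with a minimal sketch. This is the generic textbook formulation, not Pluribus's actual code; the action set and regret values below are invented for illustration.

```python
def regret_matching(cumulative_regrets):
    """Convert cumulative counterfactual regrets into a strategy.

    Actions with higher positive regret (ones we wish we had played
    more often) get proportionally higher probability; if no action
    has positive regret, fall back to the uniform strategy.
    """
    positive = [max(r, 0.0) for r in cumulative_regrets]
    total = sum(positive)
    n = len(cumulative_regrets)
    if total > 0.0:
        return [p / total for p in positive]
    return [1.0 / n] * n

# Example: cumulative regrets for fold/call/raise at one decision
# point after many self-play iterations.
strategy = regret_matching([-1.0, 3.0, 1.0])  # -> [0.0, 0.75, 0.25]
```

In full CFR, this update runs at every decision point on every iteration, and the average of the strategies over all iterations converges toward an equilibrium; MCCFR keeps the same update but only visits a sampled subset of the tree each iteration.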
DeepMind’s Alphastar AI agent will soon anonymously play with European StarCraft II players Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa OpenAI Five bots destroyed human Dota 2 players this weekend
Unity 2019.2 releases with updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler and more

Fatema Patrawala
31 Jul 2019
3 min read
Yesterday, the Unity team announced the release of Unity 2019.2. In this release, they have added more than 170 new features and enhancements for artists, designers, and programmers. They have updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler, UI Elements, and many more tools.

Major highlights for Unity 2019.2

ProBuilder 4.0 ships as verified with 2019.2. It is a unique hybrid of 3D modeling and level design tools, optimized for building simple geometry but capable of detailed editing and UV unwrapping as needed. Polybrush is now available via Package Manager as a Preview package. This versatile tool lets you sculpt complex shapes from any 3D model, position detail meshes, paint in custom lighting or coloring, and blend textures across meshes directly in the Editor. DSPGraph is the new audio rendering/mixing system, built on top of Unity’s C# Job System. It’s now available as a Preview package. The team has improved UI Elements, Unity’s new UI framework, which renders UI for graph-based tools such as Shader Graph, Visual Effect Graph, and Visual Scripting. To help you better organize your complex graphs, Unity has added subgraphs to Visual Effect Graph. You can share, combine, and reuse subgraphs for blocks and operators, and also embed complete VFX within VFX. There is an improvement in the integration between Visual Effect Graph and the High-Definition Render Pipeline (HDRP), which pulls VFX Graph in by default, providing you with additional rendering features. With Shader Graph you can now use Color Modes to highlight nodes on your graph with colors based on various features, or select your own colors to improve readability. This is especially useful in large graphs. The team has added swappable Sprites functionality to the 2D Animation tool. With this new feature, you can change a GameObject’s rendered Sprites while reusing the same skeleton rig and animation clips. 
This lets you quickly create multiple characters using different Sprite Libraries or customize parts of them with Sprite Resolvers. With this release, Burst Compiler 1.1 includes several improvements to JIT compilation time and some C# improvements. Additionally, the Visual Studio Code and JetBrains Rider integrations are available as packages. Mobile developers will benefit from improved OpenGL support, as the team has added OpenGL multithreading support (iOS) to improve performance on low-end iOS devices that don’t support Metal. As with all releases, 2019.2 includes a large number of improvements and bug fixes. You can find the full list of features, improvements, and fixes in the Unity 2019.2 Release Notes.

How to use arrays, lists, and dictionaries in Unity for 3D game development
OpenWrt 18.06.4 released with updated Linux kernel, security fixes Curl and the Linux kernel and much more!
How to manage complex applications using Kubernetes-based Helm tool [Tutorial]

Japanese Anime studio Khara is switching its primary 3D CG tools to Blender

Sugandha Lahoti
19 Aug 2019
4 min read
Popular Japanese animation studio Khara announced on Friday that it will be moving to the open source 3D software Blender as its primary 3D CG tool. Khara is a motion picture planning and production company and is currently working on “EVANGELION:3.0+1.0”, a film to be released in June 2020. Primarily, they will partially use Blender for ‘EVANGELION:3.0+1.0’ but will make the full switch once that project is finished. Khara is also helping the Blender Foundation by joining the Development Fund as a corporate member. Last month, Epic Games granted Blender $1.2 million in cash. Following Epic Games, Ubisoft also joined the Blender Development Fund and adopted Blender as its main DCC tool.

Why Khara opted for Blender?

Khara had been using Autodesk’s “3ds Max” as its primary tool for 3D CG so far. However, their project scale grew bigger than what was possible with 3ds Max. 3ds Max is also quite expensive; according to Autodesk’s website, the annual fee for a single user is $2,396. Khara also had to reach out to small and medium-sized businesses for its projects. Another complaint was that Autodesk took time to release improvements to its proprietary software, something that happens at a much faster rate in an open source software environment. They had also considered Maya as one of the alternatives, but dropped the idea as it resulted in duplication of work and resources. Finally, they switched to Blender, as it is open source and free. They were also intrigued by the new Blender 2.8 release, which provided them with a 3D creation tool that works like “paper and pencil”. Blender’s Grease Pencil feature enables you to combine 2D and 3D worlds together right in the viewport. It comes with a new multi-frame edition mode with which you can change and edit several frames at the same time. It has a Build modifier to animate the drawings, similar to the Build modifier for 3D objects.
“I feel the latest Blender 2.8 is intentionally ‘filling the gap’ with 3ds Max to make those users feel at home when coming to Blender. I think the learning curve should be no problem.”, said Mr. Takumi Shigyo, Project Studio Q Production Department. Khara founded “Project Studio Q, Inc.” in 2017, a company focusing mainly on movie production and the training of anime artists.

Providing more information on their use of Blender, Hiroyasu Kobayashi, General Manager of Digital Dpt. and Director of Board of Khara, said in the announcement, “Preliminary testing has been done already. We are now at the stage to create some cuts actually with Blender as ‘on live testing’. However, not all the cuts can be done by Blender yet. But we think we can move out from our current stressful situation if we place Blender into our work flows. It has enough potential ‘to replace existing cuts’.” While Blender will be used for the bulk of the work, Khara does have a backup plan if there's anything Blender struggles with. Kobayashi added, "There are currently some areas where Blender cannot take care of our needs, but we can solve it with the combination with Unity. Unity is usually enough to cover 3ds Max and Maya as well. Unity can be a bridge among environments." Khara is also speaking with its partner companies about using Blender together. Khara’s transition was well appreciated by people.

https://twitter.com/docky/status/1162279830785646593
https://twitter.com/eoinoneillPDX/status/1154161101895950337
https://twitter.com/BesuBaru/status/1154015669110710273

Blender 2.80 released with a new UI interface, Eevee real-time renderer, grease pencil, and more
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

CES 2019: Top announcements made so far

Sugandha Lahoti
07 Jan 2019
3 min read
CES 2019, the annual consumer electronics show in Las Vegas, will run from Tuesday, Jan. 8 through Friday, Jan. 11. However, the conference unofficially kicked off on Sunday, January 6, followed by press conferences on Monday, Jan. 7. Over the span of these two days, a lot of companies showcased their latest projects and announced new products, software, and services. Let us look at the key announcements made by prominent tech companies so far.

Nvidia

Nvidia CEO Jensen Huang unveiled some "amazing new technology innovations." First, they announced that over 40 new laptop models in 100-plus configurations will be powered by NVIDIA GeForce RTX GPUs. Turing-based laptops will be available across the GeForce RTX family, from RTX 2080 through RTX 2060 GPUs, said Huang. Seventeen of the new models will feature Max-Q design. Laptops with the latest GeForce RTX GPUs will also be equipped with WhisperMode, NVIDIA Battery Boost, and NVIDIA G-SYNC. GeForce RTX-powered laptops will be available starting Jan. 29 from the world's top OEMs. Nvidia also announced the first 65-inch 4K HDR gaming display, which will arrive in February for $4,999.

LG

LG Electronics, which has a major press conference today, has already confirmed a variety of its new products. These include LG's 2019 TVs with Alexa and Google Assistant support, 8K OLED, full HDMI 2.1 support and more. Also included are the LG CineBeam Laser 4K projector with voice control, new sound bars with Dolby Atmos and Google Assistant, and the LG Gram 17 and a new 14-inch 2-in-1.

Samsung

Samsung announced that its Smart TVs will soon be equipped with iTunes Movies & TV Shows and will support AirPlay 2 beginning Spring 2019. AirPlay 2 support will be available on Samsung Smart TVs in 190 countries worldwide. Samsung is also launching a new Notebook Odyssey to take PC gaming more seriously, posing a threat to competitors Razer and Alienware.

HP

HP also announced the HP Chromebook 14 at CES 2019. 
It is the world's first AMD-powered Chromebook, running on either an AMD A4 or A6 processor with integrated Radeon R4 or R5 graphics. It has 4GB of memory, 32GB of storage and support for Android apps from the Google Play Store. These models will start shipping in January, starting at $269.

More announcements:

- Asus launches a new 17-inch, 10-pound Surface Pro gaming laptop, the Asus ROG Mothership. It has also announced the Zephyrus S GX701, the smallest and lightest 17-inch gaming laptop yet.
- Corsair’s impressive compact gaming desktops come with Core i9 chips and GeForce RTX graphics.
- L’Oréal’s newest prototype detects wearers’ skin pH levels.
- Acer’s new Swift 7 will kill the bezel when it launches in May for $1,699. It is one of the thinnest and lightest laptops ever made.
- Audeze’s motion-aware headphones will soon recreate your head gestures in-game.
- Whirlpool is launching a Wear OS app for its connected appliances with simplified voice commands for both Google Assistant and Alexa devices.
- Vuzix starts selling its AR smart glasses for $1,000.
- Pico Interactive just revealed the Pico G2 4K, an all-in-one 4K VR headset based on China’s best-selling VR unit, the Pico G2. It’s incredibly lightweight, powerful and highly customizable for enterprise purposes. Features include kiosk mode, hands-free controls, and hygienic design.

You can have a look at all the products that will be showcased at CES 2019.

NVIDIA launches GeForce Now’s (GFN) ‘recommended router’ program to enhance the overall performance and experience of GFN
NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0
Uses of Machine Learning in Gaming
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

Vincy Davis
16 Jul 2019
4 min read
Yesterday, Epic Games announced that it is awarding the Blender Foundation $1.2 million in cash, spanning three years, to accelerate the quality of their software development projects. Blender is a free and open-source 3D creation suite which supports a full range of tools to empower artists to create 3D graphics, animation, special effects or games. Ton Roosendaal, founder and chairman of the Blender Foundation, thanked Epic Games in a statement. He said, “Thanks to the grant we will make a significant investment in our project organization to improve on-boarding, coordination and best practices for code quality. As a result, we expect more contributors from the industry to join our projects.”

https://twitter.com/tonroosendaal/status/1150793424536313862

The $1.2 million grant from Epic is part of their $100 million MegaGrants program, which was announced this year in March. Tim Sweeney, CEO of Epic Games, had announced that Epic will be offering $100 million in grants to game developers to boost the growth of the gaming industry by supporting enterprise professionals, media and entertainment creators, students, educators, and tool developers doing excellent work with Unreal Engine or enhancing open-source capabilities for the 3D graphics community. Sweeney believes that open tools, libraries, and platforms are critical to the future of the digital content ecosystem. “Blender is an enduring resource within the artistic community, and we aim to ensure its advancement to the benefit of all creators”, he adds. This is the biggest award announced by Epic so far. Blender has no obligation to use or promote Epic Games’ storefront or engine, as this is purely a generous offer by Epic Games with “no strings attached”. In April, Magic Leap revealed that it will provide 500 Magic Leap One Creator Edition spatial computing devices for giveaway as part of the Epic MegaGrants program. 
Blender users are appreciative of the support and generosity of Epic Games.

https://twitter.com/JeannotLandry/status/1150812155412963328

https://twitter.com/DomAnt2/status/1150798726379839488

A Redditor comments, "There's a reason Epic as a company has an extremely positive reputation with people in the industry. They've been doing this kind of thing for years, and a huge amount of money they're making from Fortnite is planned to be turned into grants as well. Say what you want about them, they are without question the top company in gaming when it comes to actually using their profits to immediately reinvest/donate to the gaming industry itself. It doesn't hurt that every company who works with them consistently says that they're possibly the very best company in gaming to work with."

A comment on Hacker News read, "Epic are doing a great job improving fairness in the gaming industry, and the economic conditions for developers. I'm looking forward to their Epic Store opening up to more (high quality) Indie games."

In 2015, Epic launched Unreal Dev Grants, offering a pool of $5 million to independent developers with interesting projects in Unreal Engine 4 to fund the development of their projects. In December 2018, Epic also launched the Epic Games store, where developers keep 88% of the revenue they earn.

Epic's large donation to Blender carries extra weight given that the highly anticipated release of Blender 2.8 is around the corner. Though its release candidate is already out, users are excited for the stable release. Blender 2.8 will bring a new 3D viewport and UV editor tools to enhance the user experience. With Blender aiming to increase the quality of its projects, such grants from major game publishers will only help it get bigger.

https://twitter.com/ddiakopoulos/status/1150826388229726209

A user on Hacker News comments, "Awesome. Blender is on the cusp of releasing a major UI overhaul (2.8) that will make it more accessible to newcomers (left-click is now the default!). I'm excited to see it getting some major support from the gaming industry as well as the film industry."

What to expect in Unreal Engine 4.23?
Epic releases Unreal Engine 4.22, focuses on adding "photorealism in real-time environments"
Blender celebrates its 25th birthday!

macOS gets RPCS3 and Dolphin using Gfx-portability, the Vulkan portability implementation for non-Rust apps

Melisha Dsouza
05 Sep 2018
2 min read
The Vulkan Portability implementation, gfx-portability, allows non-Rust applications that use Vulkan to run with ease. After improving the functionality of gfx-portability's Metal backend by benchmarking Dota 2 and verifying functionality against the Vulkan Conformance Test Suite (CTS), the developers planned to expand their testing to other projects that are open source, already use Vulkan for rendering, and lack strong macOS/Metal support. The projects that matched these criteria were RPCS3 and Dolphin. However, the team discovered various issues with both projects.

RPCS3 Blockers

RPCS3 satisfies all the criteria mentioned above. It is an open-source Sony PlayStation 3 emulator and debugger written in C++ for Windows and Linux. RPCS3 has a Vulkan backend, and some attempts had been made to support macOS previously. The gfx-rs team added surface and swapchain support to start off the macOS integration. This process identified a number of blockers in both gfx-rs and RPCS3. The RPCS3 developers and the gfx-rs team collaborated to quickly address the blockers, after which gameplay was rendered within RPCS3.

Dolphin support for macOS

Dolphin, the emulator for two recent Nintendo video game consoles, was actively working on adding support for macOS. While testing it with gfx-portability, the teams noticed some further minor bugs in gfx. The issues were addressed and the teams were able to render real gameplay.

Continuous Releases for the masses

The team has already started automatically releasing gfx-portability binaries under the latest release of the portability repository on GitHub. Currently the team provides macOS (Metal) and Linux (Vulkan) binaries, and will add Windows (Direct3D 12/11 and Vulkan) binaries soon. These releases ensure that users don't have to build gfx-portability themselves in order to test it with an existing project.
The binaries can be used either through the Vulkan loader on macOS or by linking them directly into an application. The team successfully ran RPCS3 and Dolphin on top of gfx-portability's Metal backend and only had to address some minor issues in the process. Stability and performance will improve as more real-world use cases are tested. You can read more about this on gfx-rs.github.io.

OpenAI Five loses against humans in Dota 2 at The International 2018
How to use artificial intelligence to create games with rich and interactive environments [Tutorial]
Best game engines for AI game development
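As a rough sketch of the "link directly" route, an application can load a Vulkan-implementing library at runtime and resolve standard entry points from it. The library file name below is an assumption (use whatever binary the gfx-portability release actually ships); `vkEnumerateInstanceExtensionProperties` is a core Vulkan entry point, and the helper returns `None` when the library is absent:

```python
import ctypes

def query_instance_extension_count(lib_name="libgfx_portability.dylib"):
    """Load a library that implements the Vulkan API and report how many
    instance extensions it exposes. Returns None if the library is missing
    (e.g. on a machine without the gfx-portability release binaries)."""
    try:
        lib = ctypes.CDLL(lib_name)
    except OSError:
        return None
    # vkEnumerateInstanceExtensionProperties is a standard Vulkan symbol,
    # so any conforming implementation (loader or gfx-portability) exports it.
    fn = lib.vkEnumerateInstanceExtensionProperties
    fn.restype = ctypes.c_int32          # VkResult (VK_SUCCESS == 0)
    count = ctypes.c_uint32(0)
    fn(None, ctypes.byref(count), None)  # first call: query the count only
    return count.value

if __name__ == "__main__":
    print(query_instance_extension_count())
```

Because the symbols are the standard Vulkan ones, the same pattern works against the system Vulkan loader, which is what makes gfx-portability a drop-in target for existing Vulkan code.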