
Tech News - Mobile

204 Articles

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers

Vincy Davis
19 Jul 2019
5 min read
Two days ago, researchers from Facebook AI Research published a paper titled "CraftAssist: A Framework for Dialogue-enabled Interactive Agents". The authors are Facebook AI research engineers Jonathan Gray and Kavya Srinet, research scientists C. Lawrence Zitnick and Arthur Szlam, and Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, and Siddharth Goyal. The paper describes the implementation of an assistant bot called CraftAssist, which appears and interacts like another player in the open sandbox game Minecraft. The framework enables players to interact with the bot via in-game chat through various implemented tools and platforms, and these interactions can be recorded. The main aim of the bot is to be a useful and entertaining assistant for all the tasks listed and evaluated by the human players.

(Image source: CraftAssist paper)

To encourage the wider AI research community to use the CraftAssist platform in their own experiments, the Facebook researchers have open-sourced the framework, the baseline assistant, the data, and the models. The released data includes the functions used to build the 2,586 houses in Minecraft, labeling data for the houses' walls, roofs, and other parts, human rephrasings of fixed commands, and conversions of natural language commands into bot-interpretable logical forms. The technology that records human and bot interaction on a Minecraft server has also been released, so that researchers can independently collect data.

Why is the Minecraft protocol used?

Minecraft is a popular multiplayer volumetric pixel (voxel) 3D game based on building and crafting, which allows multiplayer servers and players to collaborate and build, survive, or compete with each other. It operates through a client and server architecture. The CraftAssist bot acts as a client and communicates with the Minecraft server using the Minecraft network protocol. The protocol allows the bot to connect to any Minecraft server without the need to install server-side mods, so the bot can easily join a multiplayer server along with human players or other bots. It can also join any alternative server that implements the server-side component of the Minecraft network protocol. The CraftAssist bot uses Cuberite, a fast and extensible third-party open-source Minecraft game server.

Read More: Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users

How does CraftAssist function?

The block diagram below demonstrates how the bot interacts with incoming in-game chats and reaches the desired target.

(Image source: CraftAssist paper)

First, the incoming text is transformed into a logical form called the action dictionary. The action dictionary is then translated by a dialogue object, which interacts with the memory module of the bot and produces an action or a chat response to the user. The bot's memory uses a relational database, structured to capture the relations between stored items of information. The major advantage of this design is that the output of the semantic parser can be converted directly into fully specified tasks.

The bot responds to higher-level actions, called Tasks. A Task is an interruptible process that follows a clear objective of step-by-step actions. It can tolerate long pauses between steps and can push other Tasks onto a stack, much as functions can call other functions in a standard programming language. Move, Build, and Destroy are a few of the many basic Tasks assigned to the bot.
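This action-dictionary-to-Task-stack control flow is the heart of the architecture. Purely as an illustration, here is a minimal sketch of the idea in C# (the actual framework is written in Python, and every name below is hypothetical rather than taken from the CraftAssist codebase):

    using System.Collections.Generic;

    // Hypothetical sketch of the control flow described above; the real
    // framework is written in Python and none of these names come from it.
    public interface ITask
    {
        // Performs one low-level action; returns false when the Task is done.
        bool Step();
    }

    public class MoveTask : ITask
    {
        private readonly int targetX, targetZ;
        private int x, z;   // the bot's current (simplified) position

        public MoveTask(int targetX, int targetZ)
        {
            this.targetX = targetX;
            this.targetZ = targetZ;
        }

        public bool Step()
        {
            // Compare the current location to the target and take one step toward it.
            if (x < targetX) x++; else if (x > targetX) x--;
            if (z < targetZ) z++; else if (z > targetZ) z--;
            return x != targetX || z != targetZ;
        }
    }

    public class Bot
    {
        // Tasks are interruptible and can push sub-Tasks, like nested function calls.
        private readonly Stack<ITask> tasks = new Stack<ITask>();

        // Consumes an "action dictionary" produced by the semantic parser:
        // a logical form naming the high-level action and its arguments.
        public void OnActionDictionary(Dictionary<string, object> action)
        {
            if ((string)action["action_type"] == "MOVE")
            {
                var loc = (int[])action["location"];
                tasks.Push(new MoveTask(loc[0], loc[1]));
            }
            // ... BUILD, DESTROY, and other Tasks would be handled similarly.
        }

        // Called once per game tick: run one step of the topmost Task,
        // popping it once it reaches its objective.
        public void Tick()
        {
            if (tasks.Count > 0 && !tasks.Peek().Step())
            {
                tasks.Pop();
            }
        }
    }

In the real system, the action dictionary is produced by a neural semantic parser rather than hand-written rules, as described next.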
The Dialogue Manager checks for illegal or profane words, then queries the semantic parser. The semantic parser takes the chat as input and produces an action dictionary. The action dictionary indicates that the text is a command given by a human and specifies the high-level action to be performed by the bot. Once the Task is created and pushed onto the Task stack, it is the responsibility of the command Task, 'Move' in this example, to compare the bot's current location to the target location and drive the sequence of low-level step movements that brings the bot to the target.

The core of the bot's understanding of natural language depends on a neural semantic parser called the Text-to-Action-Dictionary (TTAD) model. This model receives the incoming command/chat and classifies it into an action dictionary, which is interpreted by the Dialogue Object.

The CraftAssist framework thus enables bots in Minecraft to interact and play with players by understanding human interactions, using the implemented tools. The researchers hope that, with the CraftAssist dataset now open-sourced, more developers will be empowered to contribute to the framework by assisting or training the bots, which might eventually lead to bots that learn from human dialogue interactions.

Developers have found the CraftAssist framework interesting.

https://twitter.com/zehavoc/status/1151944917859688448

A user on Hacker News comments, "Wow, this is some amazing stuff! Congratulations!"

Check out the paper CraftAssist: A Framework for Dialogue-enabled Interactive Agents for more details.

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
What to expect in Unreal Engine 4.23?
A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news


Snapchat source code leaked and posted to GitHub

Richard Gall
09 Aug 2018
2 min read
Source code for what is believed to be a small part of Snapchat's iOS application was posted on GitHub after being leaked back in May. After being notified, Snap Inc., Snapchat's parent company, immediately filed a DMCA request to GitHub to get the code removed. A copy of the request was found by a 'security researcher' tweeting from the handle @x0rz, who shared a link to it on GitHub:

https://twitter.com/x0rz/status/1026735377955086337

You can read the DMCA request in full here.

(Image: part of the Snap Inc. DMCA request to GitHub)

The initial leak back in May was caused by an update to the Snapchat iOS application. A spokesperson for Snap Inc. explained to CNET: "An iOS update in May exposed a small amount of our source code and we were able to identify the mistake and rectify it immediately... We discovered that some of this code had been posted online and it has been subsequently removed. This did not compromise our application and had no impact on our community."

This code was then published on GitHub by someone using the name Khaled Alshehri, believed to be based in Pakistan. The repository they created, called Source-SnapChat, has now been taken down. A number of posts linked to the GitHub account suggest that the leaker had tried to contact Snapchat but had been ignored. "I will post it again until I get a reply," they said.

https://twitter.com/i5aaaald/status/1025639490696691712

Leaked Snapchat code is still being traded privately

Although GitHub has taken the repo down, it's not hard to find people claiming they have a copy of the code that they're willing to trade:

https://twitter.com/iSn0we/status/1026738393353465858

Now that the code is out in the wild, it will take more than a DMCA request to get things under control. Although it would appear the leaked code isn't substantial enough to give much away to potential cybercriminals, it's likely that Snapchat is now working hard to make the changes required to tighten its security.

Read next
Snapchat is losing users – but revenue is up
15 year old uncovers Snapchat's secret visual search function


Build a Virtual Reality Solar System in Unity for Google Cardboard

Sugandha Lahoti
25 Apr 2018
21 min read
In today's tutorial, we will feature the visualization of a newly discovered solar system. We will leverage the virtual reality development process for this project in order to illustrate the power of VR and the ease of use of the Unity 3D engine. This project is a dioramic scene, where the user floats in space, observing the movement of planets within the TRAPPIST-1 planetary system. In February 2017, astronomers announced the discovery of seven planets orbiting an ultra-cool dwarf star slightly larger than Jupiter. We will use this information to build a virtual environment to run on Google Cardboard (Android and iOS) or other compatible devices.

We will additionally cover the following topics:

- Platform setup: Download and install the platform-specific software needed to build an application on your target device. Experienced mobile developers with the latest Android or iOS SDK may skip this step.
- Google Cardboard setup: This package of development tools facilitates display and interaction on a Cardboard device.
- Unity environment setup: Initializing Unity's Project Settings in preparation for a VR environment.
- Building the TRAPPIST-1 system: Design and implement the Solar System project.
- Build for your device: Build and install the project onto a mobile device for viewing in Google Cardboard.

Platform setup

Before we begin building the solar system, we must set up our computer environment to build the runtime application for a given VR device. If you have never built a Unity application for Android or iOS, you will need to download and install the Software Development Kit (SDK) for your chosen platform. An SDK is a set of tools that will let you build an application for a specific software package, hardware platform, game console, or operating system. Installing the SDK may require additional tools or specific files to complete the process, and the requirements change from year to year, as operating systems and hardware platforms undergo updates and revisions.

To deal with this nightmare, Unity maintains an impressive set of platform-specific instructions to ease the setup process. Their list contains detailed instructions for the following platforms: Apple Mac, Apple TV, Android, iOS, Samsung TV, Standalone, Tizen, Web Player, WebGL, and Windows.

For this project, we will be building for the most common mobile devices: Android or iOS. The first step is to visit either of the following links to prepare your computer:

- Android: Android users will need Android Developer Studio, the Java Virtual Machine (JVM), and assorted drivers. Follow this link for installation instructions and files: https://docs.unity3d.com/Manual/Android-sdksetup.html.
- Apple iOS: iOS builds are created on a Mac and require an Apple Developer account and the latest version of the Xcode development tools. However, if you've previously built an iOS app, these conditions will have already been met by your system. For the complete instructions, follow this link: https://docs.unity3d.com/Manual/iphone-GettingStarted.html.

Google Cardboard setup

Like the Unity documentation website, Google also maintains an in-depth guide for the Google VR SDK for Unity set of tools and examples.
This SDK provides the following features on the device:

- User head tracking
- Side-by-side stereo rendering
- Detection of user interactions (via trigger or controller)
- Automatic stereo configuration for a specific VR viewer
- Distortion correction
- Automatic gyro drift correction

These features are all contained in one easy-to-use package that will be imported into our Unity scene. Download the SDK from the following link before moving on to the next step: http://developers.google.com/cardboard/unity/download.

At the time of writing, the current version of the Google VR SDK for Unity is version 1.110.1 and it is available via a GitHub repository. The previous link should take you to the latest version of the SDK. However, when starting a new project, be sure to compare the SDK version requirements with your installed version of Unity.

Setting up the Unity environment

Like all projects, we will begin by launching Unity and creating a new project. The first steps will create a project folder which contains several files and directories:

1. Launch the Unity application.
2. Choose the New option after the application splash screen loads.
3. Create a new project and save it as Trappist1 in a location of your choice, as demonstrated in Figure 2.2.

To prepare for VR, we will adjust the Build Settings and Player Settings windows:

1. Open Build Settings from File | Build Settings.
2. Select the Platform for your target device (iOS or Android).
3. Click the Switch Platform button to confirm the change. The Unity icon in the right-hand column of the platform panel indicates the currently selected build platform. By default, it will appear next to the Standalone option. After switching, the icon should now be on the Android or iOS platform, as shown in Figure 2.3.

Note for Android developers: Ericsson Texture Compression (ETC) is the standard texture compression format on Android. Unity defaults to ETC (default), which is supported on all current Android devices, but it does not support textures that have an alpha channel. ETC2 supports alpha channels and provides improved quality for RGB textures on Android devices that support OpenGL ES 3.0. Since we will not need alpha channels, we will stick with ETC (default) for this project.

4. Open the Player Settings by clicking the button at the bottom of the window. The Player Settings panel will open in the Inspector panel.
5. Scroll down to Other Settings (Unity 5.5 through 2017.1) or XR Settings and check the Virtual Reality Supported checkbox. A list of choices will appear for selecting VR SDKs. Add Cardboard support to the list, as shown in Figure 2.4.
6. You will also need to create a valid Bundle Identifier or Package Name under the Identification section of Other Settings. The value should follow the reverse-DNS format of com.yourCompanyName.ProjectName, using alphanumeric characters, periods, and hyphens. The default value must be changed in order to build your application.

Android development note: Bundle Identifiers are unique. When an app is built and released for Android, the Bundle Identifier becomes the app's package name and cannot be changed. This restriction and other requirements are discussed in this Android documentation link: http://developer.Android.com/reference/Android/content/pm/PackageInfo.html.
Apple development note: Once you have registered a Bundle Identifier to a Personal Team in Xcode, the same Bundle Identifier cannot be registered to another Apple Developer Program team in the future. This means that, while testing your game using a free Apple ID and a Personal Team, you should choose a Bundle Identifier that is for testing only; you will not be able to use the same Bundle Identifier to release the game. An easy way to do this is to add Test to the end of whatever Bundle Identifier you were going to use, for example, com.MyCompany.VRTrappistTest. When you release an app, its Bundle Identifier must be unique to your app, and cannot be changed after your app has been submitted to the App Store.

7. Set the Minimum API Level to Android Nougat (API level 24) and leave the Target API on Automatic.
8. Close the Build Settings window and save the project before continuing.
9. Choose Assets | Import Package | Custom Package... to import the GoogleVRForUnity.unitypackage previously downloaded from http://developers.google.com/cardboard/unity/download. The package will begin decompressing the scripts, assets, and plugins needed to build a Cardboard product.
10. When completed, confirm that all options are selected and choose Import.

Once the package has been installed, a new menu titled GoogleVR will be available in the main menu. This provides easy access to the GoogleVR documentation and Editor Settings. Additionally, a directory titled GoogleVR will appear in the Project panel.

11. Right-click in the Project panel and choose Create | Folder to add the following directories: Materials, Scenes, and Scripts.
12. Choose File | Save Scenes to save the default scene. I'm using the very original Main Scene and saving it to the Scenes folder created in the previous step.
13. Choose File | Save Project from the main menu to complete the setup portion of this project.

Building the TRAPPIST-1 System

Now that we have Unity configured to build for our device, we can begin building our space-themed VR environment. We have designed this project to focus on building and deploying a VR experience. If you are moderately familiar with Unity, this project will be very simple. Again, this is by design. However, if you are relatively new, then the basic 3D primitives, a few textures, and a simple orbiting script will be a great way to expand your understanding of the development platform:

1. Create a new script by selecting Assets | Create | C# Script from the main menu. By default, the script will be titled NewBehaviourScript. Single-click this item in the Project window and rename it OrbitController. Finally, keep the project organized by dragging OrbitController's icon to the Scripts folder.
2. Double-click the OrbitController script item to edit it. Doing this will open a script editor as a separate application and load the OrbitController script for editing. The following code block illustrates the default script text:

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class OrbitController : MonoBehaviour {

        // Use this for initialization
        void Start () {

        }

        // Update is called once per frame
        void Update () {

        }
    }

This script will be used to determine each planet's location, orientation, and relative velocity within the system. The specific dimensions will be added later, but we will start by adding some public variables.
Starting on line 7, add the following five statements:

    public Transform orbitPivot;
    public float orbitSpeed;
    public float rotationSpeed;
    public float planetRadius;
    public float distFromStar;

Since we will be referring to these variables in the near future, we need a better understanding of how they will be used:

- orbitPivot stores the position of the object that each planet will revolve around (in this case, it is the star TRAPPIST-1).
- orbitSpeed is used to control how fast each planet revolves around the central star.
- rotationSpeed is how fast an object rotates around its own axis.
- planetRadius represents a planet's radius compared to Earth. This value will be used to set the planet's size in our environment.
- distFromStar is a planet's distance in Astronomical Units (AU) from the central star.

Continue by adding the following lines of code to the Start() method of the OrbitController script:

    // Use this for initialization
    void Start () {
        // Creates a random position along the orbit path
        Vector2 randomPosition = Random.insideUnitCircle;
        transform.position = new Vector3 (randomPosition.x, 0f, randomPosition.y) * distFromStar;

        // Sets the size of the GameObject to the Planet radius value
        transform.localScale = Vector3.one * planetRadius;
    }

As shown within this script, the Start() method is used to set the initial position of each planet. We will add the dimensions when we create the planets, and this script will pull those values to set the starting point of each game object at runtime.

Next, modify the Update() method by adding two additional lines of code, as indicated in the following code block:

    // Update is called once per frame. This code block updates the
    // Planet's position during each runtime frame.
    void Update () {
        this.transform.RotateAround (orbitPivot.position, Vector3.up, orbitSpeed * Time.deltaTime);
        this.transform.Rotate (Vector3.up, rotationSpeed * Time.deltaTime);
    }

This method is called once per frame while the program is running. Within Update(), the location for each object is determined by computing where the object should be during the next frame. this.transform.RotateAround uses the sun's pivot point to determine where the current GameObject (identified in the script by this) should appear in this frame. Then this.transform.Rotate updates how much the planet has rotated since the last frame.

Save the script and return to Unity.
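For reference, assembling the snippets above gives the complete OrbitController script; this listing only combines the pieces already shown, with no additions:

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class OrbitController : MonoBehaviour {

        public Transform orbitPivot;
        public float orbitSpeed;
        public float rotationSpeed;
        public float planetRadius;
        public float distFromStar;

        // Use this for initialization
        void Start () {
            // Creates a random position along the orbit path
            Vector2 randomPosition = Random.insideUnitCircle;
            transform.position = new Vector3 (randomPosition.x, 0f, randomPosition.y) * distFromStar;

            // Sets the size of the GameObject to the Planet radius value
            transform.localScale = Vector3.one * planetRadius;
        }

        // Update is called once per frame
        void Update () {
            // Revolve around the star, then spin on our own axis
            this.transform.RotateAround (orbitPivot.position, Vector3.up, orbitSpeed * Time.deltaTime);
            this.transform.Rotate (Vector3.up, rotationSpeed * Time.deltaTime);
        }
    }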
Now that we have our first script, we can begin building the star and its planets. For this process, we will use Unity's primitive 3D GameObjects to create the celestial bodies:

1. Create a new sphere using GameObject | 3D Object | Sphere. This object will represent the star TRAPPIST-1. It will reside in the center of our solar system and will serve as the pivot for all seven planets.
2. Right-click on the newly created Sphere object in the Hierarchy window and select Rename. Rename the object Star.
3. Using the Inspector tab, set the object to Position: 0,0,0 and Scale: 1,1,1.
4. With the Star selected, locate the Add Component button in the Inspector panel. Click the button and enter orbitcontroller in the search box. Double-click on the OrbitController script icon when it appears. The script is now a component of the star.
5. Create another sphere using GameObject | 3D Object | Sphere and position it anywhere in the scene, with the default scale of 1,1,1. Rename the object Planet b.

Figure 2.5, from the TRAPPIST-1 Wikipedia page, shows the relative orbital period, distance from the star, radius, and mass of each planet. We will use these dimensions and names to complete the setup of our VR environment. Each value will be entered as public variables for their associated GameObjects:

1. Apply the OrbitController script to the Planet b asset by dragging the script icon to the planet in the Scene window or to the Planet b object in the Hierarchy window. Planet b is our first planet and it will serve as a prototype for the rest of the system.
2. Set the Orbit Pivot point of Planet b in the Inspector. Do this by clicking the Selector Target next to the Orbit Pivot field (see Figure 2.6). Then, select Star from the list of objects. The field value will change from None (Transform) to Star (Transform). Our script will use the origin point of the selected GameObject as its pivot point.
3. Go back and select the Star GameObject and set the Orbit Pivot to Star as we did with Planet b.
4. Save the scene.

Now that our template planet has the OrbitController script, we can create the remaining planets:

1. Duplicate the Planet b GameObject six times, by right-clicking on it and choosing Duplicate.
2. Rename each copy Planet c through Planet h.
3. Set the public variables for each GameObject, using the following chart:

    GameObject    Orbit Speed    Rotation Speed    Planet Radius    Dist From Star
    Star          0              2                 6                0
    Planet b      .151           5                 0.85             11
    Planet c      .242           5                 1.38             15
    Planet d      .405           5                 0.41             21
    Planet e      .61            5                 0.62             28
    Planet f      .921           5                 0.68             37
    Planet g      1.235          5                 1.34             45
    Planet h      1.80           5                 0.76             60

    Table 2.1: TRAPPIST-1 GameObject Transform settings

4. Create an empty GameObject by right-clicking in the Hierarchy panel and selecting Create Empty. This item will help keep the Hierarchy window organized. Rename the item Planets and drag Planet b through Planet h into the empty item.
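If you would rather not type the Table 2.1 values into the Inspector by hand, a small setup script can apply them at startup. This helper is not part of the original tutorial; it is a sketch that assumes the OrbitController fields and GameObject names used above, and it uses Awake() so the values are assigned before OrbitController.Start() reads them:

    using UnityEngine;

    // Hypothetical convenience script (not from the book): attach it to any
    // GameObject in the scene to apply the Table 2.1 values automatically.
    public class TrappistSetup : MonoBehaviour {

        void Awake () {
            // Awake() runs before every Start(), so OrbitController will
            // see these values when it positions and scales each body.
            Set ("Star",     0f,     2f, 6f,    0f);
            Set ("Planet b", 0.151f, 5f, 0.85f, 11f);
            Set ("Planet c", 0.242f, 5f, 1.38f, 15f);
            Set ("Planet d", 0.405f, 5f, 0.41f, 21f);
            Set ("Planet e", 0.61f,  5f, 0.62f, 28f);
            Set ("Planet f", 0.921f, 5f, 0.68f, 37f);
            Set ("Planet g", 1.235f, 5f, 1.34f, 45f);
            Set ("Planet h", 1.80f,  5f, 0.76f, 60f);
        }

        void Set (string name, float orbitSpeed, float rotationSpeed,
                  float planetRadius, float distFromStar) {
            GameObject body = GameObject.Find (name);
            if (body == null) return;   // object missing or renamed

            OrbitController oc = body.GetComponent<OrbitController> ();
            if (oc == null) return;     // script not attached yet

            oc.orbitSpeed = orbitSpeed;
            oc.rotationSpeed = rotationSpeed;
            oc.planetRadius = planetRadius;
            oc.distFromStar = distFromStar;
        }
    }

Either way, the Inspector remains the simplest route; the script is just a time-saver if you rebuild the scene often.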
This completes the layout of our solar system, and we can now focus on setting a location for the stationary player. Our player will not have the luxury of motion, so we must determine the optimal point of view for the scene:

1. Run the simulation. Figure 2.7 illustrates the layout being used to build and edit the scene.
2. With the scene running and the Main Camera selected, use the Move and Rotate tools or the Transform fields to readjust the position of the camera in the Scene window. Find a position with a wide view of the action in the Game window, or a position with an interesting vantage point. Do not stop the simulation when you identify a position; stopping the simulation will reset the Transform fields back to their original values.
3. Click the small Options gear in the Transform panel and select Copy Component. This will store a copy of the Transform settings to the clipboard.
4. Stop the simulation. You will notice that the Main Camera position and rotation have reverted to their original settings. Click the Transform gear again and select Paste Component Values to set the Transform fields to the desired values.
5. Save the scene and project.

You might have noticed that we cannot really tell how fast the planets are rotating. This is because the planets are simple spheres without details. This can be fixed by adding materials to each planet. Since we really do not know what these planets look like, we will take a creative approach and go for aesthetics over scientific accuracy. The internet is a great source for the images we need. A simple Google search for planetary textures will result in thousands of options. Use a collection of these images to create materials for the planets and the TRAPPIST-1 star:

1. Open a web browser and search Google for planet textures. You will need one texture for each planet and one more for the star. Download the textures to your computer and rename them something memorable (for example, planet_b_mat). Alternatively, you can download a complete set of textures from the Resources section of the supporting website: http://zephyr9.pairsite.com/vrblueprints/Trappist1/.
2. Copy the images to the Trappist1/Assets/Materials folder.
3. Switch back to Unity and open the Materials folder in the Project panel.
4. Drag each texture to its corresponding GameObject in the Hierarchy panel. Notice that each time you do this, Unity creates a new material and assigns it to the planet GameObject.
5. Run the simulation again and observe the movement of the planets. Adjust the individual planet Orbit Speed and Rotation Speed values until the motion feels natural. Take a bit of creative license here, leaning more on the scene's aesthetic quality than on scientific accuracy.
6. Save the scene and the project.

For the final design phase, we will add a space-themed background using a Skybox. Skyboxes are rendered components that create the backdrop for Unity scenes. They illustrate the world beyond the 3D geometry, creating an atmosphere to match the setting. Skyboxes can be constructed of solids, gradients, or images using a variety of graphic programs and applications. For this project, we will find a suitable component in the Asset Store:

1. Load the Asset Store from the Window menu.
2. Search for a free space-themed skybox using the phrase space skybox price:0.
3. Select a package and use the Download button to import the package into the Scene.
4. Select Window | Lighting | Settings from the main menu.
5. In the Scene section, click on the Selector Target for the Skybox Material and choose the newly downloaded skybox.
6. Save the scene and the project.

With that last step complete, we are done with the design and development phase of the project. Next, we will move on to building the application and transferring it to a device.

Building the application

To experience this simulation in VR, we need our scene to run as a stereoscopic display on a head-mounted device. The app needs to compile the proper viewing parameters, capture and process head-tracking data, and correct for visual distortion. When you consider the number of VR devices we would have to account for, the task is nothing short of daunting. Luckily, Google VR facilitates all of this in one easy-to-use plugin.

The process for building the mobile application will depend on the mobile platform you are targeting. If you have previously built and installed a Unity app on a mobile device, many of these steps will have already been completed, and a few will apply updates to your existing software.

Note: Unity is a fantastic software platform with a rich community and an attentive development staff. During the writing of this book, we tackled software updates (5.5 through 2017.3) and various changes in the VR development process. Although we are including the simplified building steps, it is important to check Google's VR documentation for the latest software updates and detailed instructions:

- Android: https://developers.google.com/vr/unity/get-started
- iOS: https://developers.google.com/vr/unity/get-started-ios

Android Instructions

If you are just starting out building applications from Unity, we suggest beginning with the Android process.
The workflow for getting your project exported from Unity and playing on your device is short and straightforward:

1. On your Android device, navigate to Settings | About phone or Settings | About Device | Software Info.
2. Scroll down to Build number and tap the item seven times. A popup will appear, confirming that you are now a developer.
3. Now navigate to Settings | Developer options | Debugging and enable USB debugging.

Building an Android application

1. In your project directory (at the same level as the Assets folder), create a Build folder.
2. Connect your Android device to the computer using a USB cable. You may see a prompt asking you to confirm that you wish to enable USB debugging on the device. If so, click OK.
3. In Unity, select File | Build Settings to load the Build dialog.
4. Confirm that the Platform is set to Android. If not, choose Android and click Switch Platform.
5. Note that Scenes/Main Scene should be loaded and checked in the Scenes In Build portion of the dialog. If not, click the Add Open Scenes button to add Main Scene to the list of scenes to be included in the build.
6. Click the Build button. This will create an Android executable application with the .apk file extension.

Invalid command Android error

Some Android users have reported an error relating to the Android SDK Tools location. The problem has been confirmed in many installations prior to Unity 2017.1. If this problem occurs, the best solution is to downgrade to a previous version of the SDK Tools. This can be done by following the steps outlined here:

1. Locate and delete the Android SDK Tools folder [Your Android SDK Root]/tools. This location will depend on where the Android SDK package was installed. For example, on my computer the Android SDK Tools folder is found at C:\Users\cpalmer\AppData\Local\Android\sdk.
2. Download SDK Tools from http://dl-ssl.google.com/Android/repository/tools_r25.2.5-windows.zip.
3. Extract the archive to the SDK root directory.
4. Re-attempt the Build project process.

If this is the first time you are creating an Android application, you might get an error indicating that Unity cannot locate your Android SDK root directory. If this is the case, follow these steps:

1. Cancel the build process and close the Build Settings... window.
2. Choose Edit | Preferences... from the main menu.
3. Choose External Tools and scroll down to Android.
4. Enter the location of your Android SDK root folder. If you have not installed the SDK, click the download button and follow the installation process.

Install the app onto your phone and load the phone into your Cardboard device.

iOS Instructions

The process for building an iOS app is much more involved than the Android process. There are two different types of builds:

- Build for testing
- Build for distribution (which requires an Apple Developer License)

In either case, you will need the following items to build a modern iOS app:

- A Mac computer running OS X 10.11 or later
- The latest version of Xcode
- An iOS device and USB cable
- An Apple ID
- Your Unity project

For this demo, we will build an app for testing, and we will assume you have completed the Getting Started steps (https://docs.unity3d.com/Manual/iphone-GettingStarted.html) from Section 1. If you do not yet have an Apple ID, obtain one from the Apple ID site (http://appleid.apple.com/). Once you have obtained an Apple ID, it must be added to Xcode:

1. Open Xcode.
2. From the menu bar at the top of the screen, choose Xcode | Preferences. This will open the Preferences window.
3. Choose Accounts at the top of the window to display information about the Apple IDs that have been added to Xcode.
4. To add an Apple ID, click the plus sign at the bottom left corner and choose Add Apple ID.
5. Enter your Apple ID and password in the resulting popup box.
6. Your Apple ID will then appear in the list. Select your Apple ID.
7. Apple Developer Program teams are listed under the heading of Team. If you are using the free Apple ID, you will be assigned to a Personal Team. Otherwise, you will be shown the teams you are enrolled in through the Apple Developer Program.

Preparing your Unity project for iOS

1. Within Unity, open the Build Settings from the top menu (File | Build Settings).
2. Confirm that the Platform is set to iOS. If not, choose iOS and click Switch Platform at the bottom of the window.
3. Select the Build & Run button.

Building an iOS application

Xcode will launch with your Unity project. Select your platform and follow the standard process for building an application from Xcode. Install the app onto your phone and load the phone into your Cardboard device.

We looked at the basic Unity workflow for developing VR experiences. We also provided a stationary solution so that we could focus on the development process. The Cardboard platform provides access to VR content from a mobile platform, but it also allows for touch and gaze controls.

You read an excerpt from the book Virtual Reality Blueprints, written by Charles Palmer and John Williamson. In this book, you will learn how to create compelling Virtual Reality experiences for mobile and desktop with three top platforms: Cardboard VR, Gear VR, and OculusVR.

Read More
Top 7 modern Virtual Reality hardware systems
Virtual Reality for Developers: Cardboard, Gear VR, Rift, and Vive


macOS Catalina is now available for download

Sugandha Lahoti
08 Oct 2019
3 min read
Apple released macOS Catalina today as the next major update to the company's Mac operating system. With macOS Catalina, iTunes is now broken into separate apps for Apple Music, Podcasts, and Apple TV. Catalina also features the Apple Arcade game subscription service and Sidecar, which extends Mac desktops to a second display. For developers, Catalina has Mac Catalyst for building versions of iPad apps compatible with Mac. macOS Catalina was officially revealed at WWDC 2019 in June, and the public beta was released later that month.

What's new in macOS Catalina

Sidecar

Sidecar extends your Mac workspace by using an iPad as a second display, both wirelessly and when plugged in. Sidecar also supports the Apple Pencil, letting you work in any Mac app or third-party Mac app that supports stylus input. According to an Apple white paper, the only laptops that Sidecar works on are MacBooks from 2016 or later, MacBook Airs from 2018 or later, and MacBook Pros from 2016 or later. All of them use Apple's butterfly keyboard.

Addition of Apple Arcade

The Apple Arcade game subscription service is available at $4.99 per month to play games on Mac. Apple Arcade subscribers get the full version of every game, including all updates and expansions, without any ads or additional in-game purchases. The service is launching with a 30-day free trial, and a single subscription includes access for up to six family members with Family Sharing.

iTunes replaced with new entertainment apps

iTunes saw its long-awaited death and was replaced by three new apps: Apple Music, Apple Podcasts, and Apple TV. The Music app features over 50 million songs, playlists, and music videos. Apple Podcasts offers more than 700,000 shows in its catalog. Apple TV+, Apple's video subscription service, will be available in the Apple TV app for Mac starting November 1.

The removal of iTunes, however, is a problem for DJs who rely on XML files to sort through file libraries and quickly find tracks while performing. According to Apple, along with Catalina's removal of iTunes, users are also losing XML file support, as all native music playback on Macs moves over to the official Music app, which has a new library format.

https://twitter.com/danideahl/status/1181342504949633025

Additional features

You also get Screen Time on macOS and stricter privacy protections: apps will have to ask for permission to access the desktop, documents, iCloud Drive, and external storage. With Activation Lock, any Mac that has a T2 security chip cannot be erased and reactivated without an Apple ID password. The 'Find My' app combines 'Find My iPhone' and 'Find My Friends' into a single, easy-to-use app on Mac, iPad, and iPhone. Mail in macOS Catalina adds the ability to block email from a specified sender, mute an overly active thread, and unsubscribe from commercial mailing lists.

The macOS Catalina update is a free download. It can be installed by clicking on the Apple icon in the upper left corner of your screen, choosing System Preferences, and then selecting Software Update.

Apple bans HKmap.live, a Hong Kong protest safety app from the iOS Store as it makes people 'evade law enforcement'
Apple iPadOS now available for download with Slide Over and Split View, Home Screen updates, and more
Apple's September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, Apple TV+ and more


WireGuard launches an official macOS app

Savia Lobo
19 Feb 2019
2 min read
Last week, the team at WireGuard announced an initial version of WireGuard for macOS and launched the app in the Mac App Store. The app makes use of Apple's Network Extension API to provide native integration into the operating system's networking stack. WireGuard is a fast, modern, and secure VPN tunnel.

The new app can import tunnels from archives and files, or you can create one from scratch. The developers say it is currently undergoing rapid development, and they are taking feedback from users while implementing new and exciting features.

According to TechCrunch, "The app is a drop-down menu in the menu bar. You can manage your tunnel and activate on-demand connections for some scenarios. For instance, you could choose to activate your VPN exclusively if you're connected to the internet using Wi-Fi, and not Ethernet."

(Image source: TechCrunch)

The developers say that "because the app uses these deep integration APIs, we're only allowed to distribute the application using the macOS App Store (whose rejections, appeals, and eventual acceptance made for quite the stressful saga over the last week and a half)."

To know more about this announcement, read WireGuard's email thread.

Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
Undetected Linux Backdoor 'SpeakUp' infects Linux, MacOS with cryptominers
Final release for macOS Mojave is here with new features, security changes and a privacy flaw


Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python

Vincy Davis
21 Aug 2019
3 min read
A group of German researchers recently published a paper, "A Billion Open Interfaces for Eve and Mallory: MitM, DoS, and Tracking Attacks on iOS and macOS Through Apple Wireless Direct Link", at the 28th USENIX Security Symposium (August 14–16) in the USA. The paper reveals security and privacy vulnerabilities in Apple's AirDrop file-sharing service, as well as denial-of-service (DoS) attacks which lead to privacy leaks or the simultaneous crashing of all neighboring devices.

As part of the research, Milan Stute and Alexander Heinrich, two of the researchers, have developed OpenDrop, an open-source implementation of Apple AirDrop written in Python. OpenDrop is, in effect, a FOSS implementation of AirDrop. It is experimental software and is the result of reverse-engineering efforts by the Open Wireless Link project (OWL). It is compatible with Apple AirDrop and is used for sharing files among Apple devices, such as iOS and macOS devices, or on Linux systems running an open re-implementation of Apple Wireless Direct Link (AWDL). The OWL project consists of researchers from the Secure Mobile Networking Lab at TU Darmstadt looking into Apple's wireless ecosystem. It aims to assess security and privacy and enables cross-platform compatibility for next-generation wireless applications.

Currently, OpenDrop only supports Apple devices. It does not support all features of AirDrop and may be incompatible with future AirDrop versions. It uses the current versions of OpenSSL and libarchive and requires Python 3.6 or later. OpenDrop is licensed under the GNU General Public License v3.0. It is not affiliated with or endorsed by Apple Inc.

Limitations in OpenDrop

- Triggering macOS/iOS receivers via Bluetooth Low Energy: Since Apple devices start their AWDL interface and AirDrop server only after receiving a custom advertisement via Bluetooth LE, Apple AirDrop receivers may not be discovered.
- Sender/receiver authentication and connection state: Currently, OpenDrop does not conduct peer authentication. It does not verify whether the TLS certificate is signed by Apple's root, and it automatically accepts any file it receives.
- Sending multiple files: OpenDrop does not support sending multiple files for sharing, a feature supported by Apple's AirDrop.

Users are excited to try the new OpenDrop implementation. A Redditor comments, "Yesssss! Will try this out soon on Ubuntu." Another comment reads, "This is neat. I did not realize that enough was known about AirDrop to reverse engineer it. Keep up the good work." Another user says, "Wow, I can't wait to try this out! I've been in the Apple ecosystem for years and AirDrop was the one thing I was really going to miss."

A few Android users wish to see such an implementation in an Android app. A user on Hacker News says, "Would be interesting to see an implementation of this in the form of an Android app, but it looks like that might require root access." A Redditor comments, "It'd be cool if they were able to port this over to android as well."

To learn how to send and receive files using OpenDrop, check out its GitHub page.

Apple announces expanded security bug bounty program up to $1 million; plans to release iOS Security Research Device program in 2020
Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
'FaceTime Attention Correction' in iOS 13 Beta 3 uses ARKit to fake eye contact

Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular

Richard Gall
23 Jan 2019
4 min read
Ionic today released Ionic Framework 4.0, a complete rebuild of the popular JavaScript framework for developing mobile and desktop apps. Although Ionic has, up until now, been built using Angular components, this new version has instead been built using Web Components. This is significant, as it changes the whole ball game for the project. It means Ionic Framework is now an app development framework that can be used alongside any front-end framework, not just Angular.

The shift away from Angular makes a lot of sense for the project. It now has the chance to grow adoption beyond the estimated five million developers around the world already using the framework. While in the past Ionic could only be used by Angular developers, it now opens up new options for development teams; rather than exacerbating a talent gap in many organizations, it could instead help ease it. However, although it looks like Ionic is taking a significant step away from Angular, it's important to note that, at the moment, Ionic Framework 4.0 is only out on general availability for Angular; it's still only in alpha for Vue.js and React.

Ionic Framework 4.0 and open web standards

Although the move to Web Components is the stand-out change in Ionic Framework 4.0, it's also worth noting that the release has been developed in accordance with open web standards. This has been done, according to the team, to help organizations develop Design Systems (something the Ionic team wrote about just a few days ago): essentially, a set of guidelines and components that can be reused across multiple platforms and products to maintain consistency across various user experience touch points.

Why did the team make these changes to Ionic Framework 4.0?

According to Max Lynch, Ionic Framework co-founder and CEO, the changes present in Ionic Framework 4.0 should help organizations achieve brand consistency quickly, and give development teams the option of using Ionic with their JavaScript framework of choice. Lynch explains:

"When we look at what's happening in the world of front-end development, we see two major industry shifts... First, there's a recognition that the proliferation of proprietary components has slowed down development and created design inconsistencies that hurt users and brands alike. More and more enterprises are recognizing the need to adopt a design system: a single design spec, or library of reusable components, that can be shared across a team or company. Second, with the constantly evolving development ecosystem, we recognized the need to make Ionic compatible with whatever framework developers wanted to use—now and in the future. Rebuilding our Framework on Web Components was a way to address both of these challenges and future-proof our technology in a truly unique way."

What does Ionic Framework 4.0 tell us about the future of web and app development?

Ionic Framework 4.0 is a really interesting release, as it tells us a lot about where web and app development is today. It confirms, for example, that Angular's popularity is waning. It also suggests that Web Components are going to be the building blocks of the web for years to come, regardless of how frameworks evolve. As Lynch writes in a blog post introducing Ionic Framework 4.0, "in our minds, it was clear Web Components would be the way UI libraries, like Ionic, would be distributed in the future. So, we took a big bet and started porting all 100 of our components over."
Ionic Framework 4.0 also suggests that Progressive Web Apps are here to stay. Lynch writes in the blog post linked above that "for Ionic to reach performance standards set by Google, new approaches for asynchronous loading and delivery were needed." To do this, he explains, the team "spent a year building out a web component pipeline using Stencil to generate Ionic's components, ensuring they were tightly packed, lazy loaded, and delivered in smart collections consisting of components you're actually using." The time taken to ensure that the framework could meet those standards, essentially, that it could support high-performance PWAs, underscores that this will be one of the key use cases for Ionic in the years to come.


What to expect from D programming language in the near future

Fatema Patrawala
17 Oct 2019
3 min read
On Tuesday, Atila Neves, the deputy leader of the D programming language, posted about his vision for D and what he would like to do with the language in the near future.

Make D the default language for web dev and mobile applications

D's static reflection and code generation capabilities make it an ideal candidate for implementing a codebase that needs to be called from several different languages and environments (e.g. Python, Java, R). Traditionally this is done by specifying data structures and RPC calls in an Interface Definition Language (IDL), then translating that to the supported languages, with a wire protocol to go along with it. With D, none of that is necessary: one can write the production code in D and have libraries automatically make the code callable from other languages. It would thus be easy to write D code that runs as fast or faster than the alternatives, a win on all fronts.

Memory safety for D

Atila notes that D, as a systems programming language with value types and pointers, isn't memory safe. He says that DIP1000 is a step in the right direction, but the language still needs to become memory safe by default, with programmers opting out via a @trusted block or function. The DIP1000 proposal includes a scope mechanism that knows when the lifetime of a reference is over, by guaranteeing that a reference cannot escape its lexical scope. This makes it possible to safely implement memory management schemes other than tracing garbage collection.

Safe and easy concurrency in D

As per Atila, safe and easy concurrency in D is mostly achieved through actor models, but the team still needs to finalize shared and make everything @safe as well.

Centralizing all reflection needs with an API

Atila says that instead of disparate ways of getting things done with fragmented APIs (__traits, std.traits, custom code), he would like there to be a library that centralizes all reflection needs with a great API.

Easy interoperability for C++ developers

C++ has been successful so far in making the transition from C virtually seamless. Atila wants current C++ programmers with legacy codebases to be able to start writing D code just as easily.

Faster development times

D needs a fast interpreter so that developers can skip machine code generation and linking. This should be the default way of running unittest blocks for faster feedback, with programmers only compiling their code for runtime performance and/or to ship binaries to final users.

String interpolation in D

Code generation is one of D's greatest strengths, and token strings enable visually pleasing blocks of code that are actually "just strings". String interpolation would make this vastly easier to use.

To know more about the D programming language, check out the official post by Atila Neves.

"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
The V programming language is now open source – is it too good to be true?
Rust's original creator, Graydon Hoare on the current state of system programming and safety


Ionic React RC is now out!

Bhagyashree R
16 Aug 2019
3 min read
Earlier this year, the Ionic team released the beta version of Ionic React. After receiving developer feedback and contributions from the community, the team launched Ionic React RC on Wednesday.

The team says that this first major release of Ionic React was made possible by Ionic 4.0. Previously, Ionic was built using Angular components, but Ionic 4.0 was rewritten to use Web Components. This change made Ionic an app development framework that can be used alongside any front-end framework, not just Angular.

Why Ionic React is needed

Explaining the motivation behind Ionic React, Ely Lucas, Software Engineer & Dev Advocate at Ionic, wrote in the announcement, "Ionic React RC marks the first major release of our vision to bring Ionic development to more developers on other frameworks."

Though it is possible to import the core Ionic components directly into React projects, this method does not provide a good developer experience. Also, when working with web components in React, you need to write some boilerplate code to properly communicate with the web components. Ionic React will essentially work as a "thin wrapper" around the core components of Ionic and will export them as native React components, handling the boilerplate code for you. However, you still need to implement a few features in the native framework, such as page lifetime management and lifecycle methods. You can do this by extending the react-router package with @ionic/react-router.

Considering this is a release candidate, the team is not expecting many major changes. Sharing the team's next steps, Lucas said, "We will be looking closely at any issues that pop up during the RC phase and working on some final code stabilization and minor bug fixes... We also plan on creating some more content and guides in the docs to help with some best practices we've found when working with Ionic React."

The team is now seeking developer feedback before the final release. If you encounter any issues, you can report them on the GitHub repo and tag the issue with "package react". For further updates on Ionic React, you can also chat with the team, who will be present at React Rally, August 22-23 in Salt Lake City, UT. This is a community conference that brings together developers of all backgrounds using React.js, React Native, and related tools.

Many developers are excited about this update. Here's what a few Twitter users are saying:

https://twitter.com/clandestoapp/status/1161893695194636289
https://twitter.com/miniallaghi/status/1161964880913719297

Check out the official announcement by the Ionic team to know more in detail.

The Ionic team announces the release of Ionic React Beta
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
Ionic v4 RC released with improved performance, UI Library distribution and more


Valve announces Half-Life: Alyx, its first flagship VR game

Savia Lobo
19 Nov 2019
3 min read
Yesterday, Valve Corporation, the popular American video game developer, announced Half-Life: Alyx, the first new game in the Half-Life series in over a decade. The company tweeted that it will unveil the first look on Thursday, 21st November 2019, at 10 am Pacific Time.

https://twitter.com/valvesoftware/status/1196566870360387584

Half-Life: Alyx, a brand-new game in the Half-Life universe, is designed exclusively for PC virtual reality systems (Valve Index, Oculus Rift, HTC Vive, Windows Mixed Reality). Over its history in PC games, Valve has created some of the most influential and critically acclaimed games ever made. However, "Valve has famously never finished either of its Half-Life supposed trilogies of games. After Half-Life and Half-Life 2, the company created Half-Life: Episode 1 and Half-Life: Episode 2, but no third game in the series," the Verge reports.

Ars Technica reveals, "The game's name confirms what has been loudly rumored for months: that you will play this game from the perspective of Alyx Vance, a character introduced in 2004's Half-Life 2. Instead of stepping forward in time, HLA will rewind to the period between the first two mainline Half-Life games."

"A data leak from Valve's Source 2 game engine, as uncovered in September by Valve News Network, pointed to a new control system labeled as the "Grabbity Gloves" in its codebase. Multiple sources have confirmed that this is indeed a major control system in HLA," Ars Technica claims. These Grabbity Gloves can also be described as 'magnet gloves', which let you point at distant objects and attract them to your hands. Valve has already announced plans to support all major PC VR systems for its next VR game, and these new gloves seem like the right system to scale across whatever controllers come to VR.

Many gamers are excited to check out this Half-Life installment and are waiting to see whether the company lives up to its word. A user on Hacker News commented, "Wonder what Valve is doubling down with this title? It seems like the previous games were all ground-breaking narratives, but with most of the storytellers having left in the last few years, I'd be curious to see what makes this different than your standard VR games."

Another user on Hacker News commented, "From the tech side it was the heavy, and smart, use of scripting that made HL1 stand out. With HL2 it was the added physics engine trough the change to Source, back then that used to be a big deal and whole gameplay mechanics revolve around that (gravity gun). In that context, I do not really consider it that surprising for the next HL project to focus on VR because even early demos of that combination looked already very promising 5 years ago"

We will update this space after Half-Life: Alyx is unveiled on Thursday. To know more about the announcement in detail, read Ars Technica's complete coverage.

Valve reveals new Index VR Kit with detail specs and costs upto $999
Why does Oculus CTO John Carmack prefer 2D VR interfaces over 3D Virtual Reality interfaces?
Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!

Homebrew 2.2 releases with support for macOS Catalina

Vincy Davis
28 Nov 2019
3 min read
Yesterday, the project manager of Homebrew, Mike McQuaid, announced the release of Homebrew 2.2, the third release of Homebrew this year. Major highlights of the new version include support for macOS Catalina, a faster implementation of HOMEBREW_AUTO_UPDATE_SECS, quicker post-install dependent checking for brew upgrade, and more.

Read More: After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

New key features in Homebrew 2.2

- Homebrew now supports macOS Catalina (10.15); macOS Sierra (10.12) and older are no longer supported.
- The no-op case for HOMEBREW_AUTO_UPDATE_SECS is now extremely fast, and the auto-update interval defaults to 5 minutes instead of 1 (see the shell sketch after the links below).
- brew upgrade no longer returns an error code when a formula is already up-to-date.
- brew upgrade's post-install dependent checking is now significantly faster and more reliable.
- Homebrew on Linux has been updated and has raised its minimum requirements.
- Starting from Homebrew 2.2, the package manager uses OpenSSL 1.1.
- The Homebrew team has disabled brew tap-pin, since it was buggy and little used by Homebrew maintainers.
- Homebrew will stop supporting Python 2.7 by the end of 2019, when it reaches EOL.

Read More: Apple's MacOS Catalina in major turmoil as it kills iTunes and drops support for 32 bit applications

Many users are excited about this release and have appreciated the maintainers of Homebrew for their efforts.

https://twitter.com/DVJones89/status/1199710865160843265
https://twitter.com/dirksteins/status/1199944492868161538

A user on Hacker News comments, “While Homebrew is perhaps technically crude and somewhat inflexible compared to other and older package managers, I think it deserves real credit for being so easy to add packages to. I contributed Homebrew packages after a few weeks of using macOS, while I didn't contribute a single package in the ten years I ran Debian. I'm also impressed by the focus of the maintainers and their willingness for saying no and cutting features. We need more of that in the programming field. Homebrew is unashamedly solely for running the standard configuration of the newest version of well-behaved programs, which covers at least 90% of my use cases. I use Nix when I want something complicated or nonstandard.”

To know about the features in detail, head over to Homebrew's official page.

Announcing Homebrew 2.0.0!
Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and much more!
Homebrew's GitHub repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
ActiveState adds thousands of curated Python packages to its platform
Firefox Preview 3.0 released with Enhanced Tracking Protection, Open links in Private tab by default and more
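As a quick illustration of the auto-update behavior mentioned above, here is a minimal shell sketch; the 3600-second interval is an arbitrary example value of our own, not something the release notes prescribe.

```sh
# Homebrew auto-updates before commands like install and upgrade when the
# last update is older than HOMEBREW_AUTO_UPDATE_SECS (2.2 defaults to
# 300 seconds, i.e. 5 minutes, up from 60). Override it in your shell
# profile, e.g. to once an hour:
export HOMEBREW_AUTO_UPDATE_SECS=3600

# With 2.2's faster no-op path, this returns almost immediately when
# everything is already up-to-date:
brew upgrade
```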

Google releases Oboe, a C++ library to build high-performance Android audio apps

Bhagyashree R
12 Oct 2018
3 min read
Yesterday, Google released the first production-ready version of Oboe, a C++ library for building real-time audio apps. One of its main benefits is the lowest possible audio latency across the widest range of Android devices; think of it as AndroidX for native audio.

How Oboe works

Apps communicate with Oboe by reading and writing data to streams. The library facilitates the movement of audio data between your app and the audio inputs and outputs on your Android device. Apps pass data in and out by reading from and writing to audio streams, represented by the class AudioStream. A stream consists of the following:

Audio device: a hardware interface or virtual endpoint that acts as a source or sink for a continuous stream of digital audio data, for example, a built-in mic or a Bluetooth headset.

Sharing mode: determines whether a stream has exclusive access to an audio device that might otherwise be shared among multiple streams.

Audio format: the format of the audio data in the stream. The data passed through a stream has the usual digital audio attributes, which developers must specify when defining a stream: the sample format, samples per frame, and sample rate. (The table of sample formats Oboe allows is available in the documentation on GitHub.)

What are its benefits

Oboe leverages the improved performance and features of AAudio on Oreo MR1 (API 27+) while maintaining backward compatibility down to API 16+. Some of its benefits:

You write and maintain less code: Oboe uses C++, allowing you to write clean and elegant code. With Oboe you can create an audio stream in just a few lines of code, whereas the same thing with OpenSL ES requires 50+ lines (see the sketch after the links below).

Accelerated release process: as Oboe is supplied as a source library, bug fixes can be rolled out in a few days, as opposed to the Android platform release cycle.

Better bug handling and less guesswork: it provides workarounds for known audio bugs and sensible default behaviour for stream properties.

Open source: it is open source and maintained by Google engineers.

To get started with Oboe, check out the full documentation and the code samples available on its GitHub repository. Also, read the announcement posted on the Android Developers Blog.

What role does Linux play in securing Android devices?
A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Google announces updates to Chrome DevTools in Chrome 71
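To make the stream-building flow above concrete, here is a minimal C++ sketch modeled on the patterns in Oboe's public samples; the silence-filling callback is a placeholder of our own, and a real app would render audio into the buffer instead.

```cpp
#include <algorithm>
#include <oboe/Oboe.h>

// Placeholder callback: fills every buffer with silence. Oboe invokes
// onAudioReady whenever the stream needs more audio data.
class SilenceCallback : public oboe::AudioStreamCallback {
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *stream,
                                          void *audioData,
                                          int32_t numFrames) override {
        auto *out = static_cast<float *>(audioData);
        std::fill(out, out + numFrames * stream->getChannelCount(), 0.0f);
        return oboe::DataCallbackResult::Continue;
    }
};

SilenceCallback callback;

oboe::AudioStream *startOutputStream() {
    // Describe the stream: output direction, low latency, float samples,
    // and an exclusive sharing mode, as discussed above.
    oboe::AudioStreamBuilder builder;
    builder.setDirection(oboe::Direction::Output)
           ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
           ->setSharingMode(oboe::SharingMode::Exclusive)
           ->setFormat(oboe::AudioFormat::Float)
           ->setChannelCount(oboe::ChannelCount::Stereo)
           ->setCallback(&callback);

    oboe::AudioStream *stream = nullptr;
    oboe::Result result = builder.openStream(&stream);
    if (result == oboe::Result::OK && stream != nullptr) {
        stream->requestStart();  // begin invoking the callback
    }
    return stream;
}
```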

Apple’s MacOS Catalina in major turmoil as it kills iTunes and drops support for 32 bit applications

Fatema Patrawala
09 Oct 2019
4 min read
Yesterday, Apple released macOS Catalina, its latest update for Macs and MacBooks. The new operating system can be installed from the homepage of its App Store. Catalina brings a host of new features, including the option to use apps from the iPad as well as turn the tablet into an additional display for computers. But the new update kills iTunes and ships with some major issues. Apple has confirmed that there are serious problems in macOS Catalina, and affected consumers should refrain from updating until these issues are addressed.

Catalina is the release that finally kills iTunes, which is nowhere to be found in the new update. Instead, Apple has moved the features of iTunes into a separate Music app; the update also includes separate apps for Podcasts and TV.

The macOS Catalina update is a big problem for DJs who rely on iTunes

The Mac platform is especially popular with DJs, who cart around MacBook Pro machines jam-packed with music, playlists, mixes, and specialist software that lets them perform every evening. These tools have been tied to iTunes' underlying XML database. But after nearly two decades, iTunes is discontinued in macOS Catalina, and the XML file no longer exists to index a local music collection. This has broken popular and niche music tools alike, including major titles such as Traktor and Rekordbox.

The Verge reports that Apple has confirmed this issue is down to its removal of the XML file, but is handing responsibility to the third-party developers behind each app. Unfortunately for Apple's reputation, those developers had been expecting the new standalone Music app to be able to export an XML file, a feature Apple suggested would be available until they could code around the lack of XML. Fact Mag also reported, “this news contradicts Apple's earlier assertion that there would be a way to manually export the XML file from the new Music app, though Catalina's launch yesterday now proves this isn't the case at all.”

Apple advises DJs that if they rely on software that needs this XML file to function, they should not update to Catalina until individual developers have issued compatibility updates for the new operating system.

Catalina drops support for 32-bit applications and faces other issues as well

Catalina also drops support for 32-bit applications: they simply will not run under the new system, as this version of macOS is 64-bit only. If you are a Mac user reliant on a 32-bit app, you get just a single dialog on installation warning of the loss of support. That leaves users with questions that need answers: which of your apps are 32-bit and which are 64-bit? And if a 32-bit app is mission-critical in your role, is a 64-bit alternative available?

It's not just this; a number of creative tools, including Apple Aperture, Microsoft Office 2011, and Adobe CS6, are also experiencing issues with Catalina. Additionally, there are font issues in macOS Catalina: as per the Chromium blog, the macOS system font appears "off" -- too light / tight kerning.

It is clear that Apple wants to push forward with its platforms, but it needs to remember that the hardware has to work in the real world today. Apple should be consistent in what features it offers, it should provide clear and accurate information to developers and users, and it should ensure at the very least that its own store is in order.
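For context on why so many DJ tools broke, the iTunes XML database was simply a property list that third-party apps parsed directly. Here is a hedged Python sketch of that pattern; the file name is the one iTunes historically generated, and the 'Tracks', 'Name', and 'Location' keys are the ones its plist traditionally contained.

```python
import plistlib

# iTunes historically maintained an XML property list indexing the local
# music collection; DJ apps like Traktor and Rekordbox parsed it to build
# their own track lists.
with open("iTunes Music Library.xml", "rb") as f:
    library = plistlib.load(f)

# 'Tracks' maps iTunes track IDs to per-track metadata dictionaries.
for track in library.get("Tracks", {}).values():
    print(track.get("Name"), "->", track.get("Location"))
```

Under Catalina's new Music app, this file is no longer generated, which is why such tools fail until their developers code around the missing XML.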
TextMate 2.0, the text editor for macOS releases
MacOS terminal emulator, iTerm2 3.3.0 is here with new Python scripting API, a scriptable status bar, Minimal theme, and more
Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!
WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more
Apple plans to make notarization a default requirement in all future macOS updates

Ionic framework announces Ionic 4 Beta

Sugandha Lahoti
30 Jul 2018
3 min read
Ionic has been a popular UI library among web developers because it is completely framework-agnostic. Now the team has announced the much-anticipated Ionic 4 Beta release, with a focus on performance, build time improvements, theming, multi-framework capabilities, and more. Although the release is in beta, Ionic invites developers to test it and migrate their existing ionic-angular apps.

You need to install the latest version of the Ionic CLI (4.0.0) using:

npm install -g ionic

Since v4 is still in beta, creating projects with it requires a flag when starting a new Ionic app:

ionic start myApp tabs --type=angular

Your v4 beta app will be created using ng cli conventions and the new Ionic 4 components. Let's talk about the features Ionic 4 comes packed with.

Focus on Web standards

Ionic 4 has been rebuilt using standard Web APIs, and each component is packaged as a standards-compliant Web Component (see the sketch after the links below). With this standardization, the framework now relies solely on APIs with native browser support, keeping the public API for each component stable. The Ionic team has also developed and open-sourced Stencil, a Web Component compiler, to make it easier to build each component as a Web Component. Ionic 4 fully embraces modern Web APIs such as Custom Elements, CSS Variables, and Shadow DOM.

Updates for Angular

This release also adopts new Angular tooling and features, following Angular standards and conventions to make Ionic 4 Angular's leading mobile solution. Angular developers can now use the Angular CLI directly for Ionic apps; ionic-app-scripts are replaced with the Angular CLI and Router.

Changes to Documentation

Ionic 4 comes with completely redesigned Ionic Framework documentation. The new docs improve load and navigation performance and are easier to update and maintain. There are more examples and previews, along with more code snippets. The new docs are built with Stencil.

Other improvements

New Ionicons 4.0 with reduced sizes and brand-new icon forms reflecting the latest iOS and Material Design styles.
Ionic Native 5.0 Beta has been upgraded to be framework-independent. Check out the new Native API Docs.
CLI 4.0 is heavily refactored, offering powerful Cordova integration with livereload, custom schematics for generators, and support for multiple projects. The new CLI Docs provide more information in a cleaner and easier-to-read layout.
Shadow DOM makes it easy to reduce the amount of client-side code by embracing native browser APIs and web standards. Additionally, Shadow DOM makes it easy to consume Ionic components from any web app by encapsulating their styles.

You can learn more about the Ionic 4 Beta release in the Ionic 4 beta docs. The Migration Guide and the Installation guide are also available.

Ionic Components
Creating Our First App with Ionic
How to use SQLite with Ionic to store data?
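As a hedged sketch of what the Web Components approach enables, the snippet below loads Ionic 4's components into a plain HTML page with no framework at all; the unpkg URL is illustrative, and the exact script path may differ between beta builds.

```html
<!-- Ionic 4 components are standards-compliant Custom Elements, so a
     plain page can use them directly once the component bundle loads. -->
<script src="https://unpkg.com/@ionic/core/dist/ionic.js"></script>

<ion-app>
  <ion-content>
    <!-- ion-button's internal styles are encapsulated by Shadow DOM. -->
    <ion-button onclick="alert('Hello from Ionic 4')">Tap me</ion-button>
  </ion-content>
</ion-app>
```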

New iPhone exploit checkm8 is unpatchable and can possibly lead to permanent jailbreak on iPhones

Sugandha Lahoti
30 Sep 2019
4 min read
A pseudonymous iOS researcher who goes by the Twitter handle @axi0mX has released a new iOS exploit, checkm8, that affects all iOS devices running on A5 to A11 chipsets. The exploit targets vulnerabilities in Apple's bootrom (secure boot ROM), which can give phone owners and hackers deep-level access to their iOS devices. Once a device is jailbroken this way, Apple cannot block or patch the exploit with a future software update, so checkm8 can lead to a permanent, unblockable jailbreak on iPhones. Jailbreaking can allow hackers to get root access, enabling them to install software that is unavailable in the Apple App Store, run unsigned code, read and write to the root filesystem, and more.

https://twitter.com/axi0mX/status/1178299323328499712

The researcher considers checkm8 possibly the biggest news in the iOS jailbreak community in years, because bootrom jailbreaks are mostly permanent and cannot be patched. Fixing the flaw would require physical modifications to device chipsets, which could only happen through recalls or mass replacements. checkm8 is also the first bootrom-level exploit publicly released for an iOS device since the iPhone 4, which came out almost a decade ago.

axi0mX previously released another jailbreak-enabling exploit, alloc8, in 2017. alloc8 exploits a powerful vulnerability in the bootrom's malloc function and applies to iPhone 3GS devices. checkm8, by contrast, impacts devices from the iPhone 4S (A5 chip) through the iPhone 8 and iPhone X (A11 chip). The only exception is the A12 processor found in the iPhone XS / XR and 11 / 11 Pro devices, for which Apple has patched the flaw. A full jailbreak with Cydia on the latest iOS version is possible, but requires additional work.

Explaining the reason for making this exploit public, @axi0mX said, “a bootrom exploit for older devices makes iOS better for everyone. Jailbreakers and tweak developers will be able to jailbreak their phones on latest version, and they will not need to stay on older iOS versions waiting for a jailbreak. They will be safer.” The researcher adds, “I am releasing my exploit for free for the benefit of iOS jailbreak and security research community. Researchers and developers can use it to dump SecureROM, decrypt keybags with AES engine, and demote the device to enable JTAG. You still need additional hardware and software to use JTAG.”

For now, the checkm8 exploit is released in beta, and there is no actual jailbreak yet. You can't simply download a tool, crack your device, and start downloading apps and modifications to iOS. axi0mX's exploit is available on GitHub, but the code isn't recommended for users without proper technical skills, as it could easily result in bricked devices. Nonetheless, it remains an unpatchable issue and poses security risks for iOS users. Apple has not yet acknowledged the checkm8 exploit.

A number of people tweeted about the exploit and tried it:

https://twitter.com/FCE365/status/1177558724719853568
https://twitter.com/SparkZheng/status/1178492709863976960
https://twitter.com/dangoodin001/status/1177951602793046016

The past year saw a number of iOS exploits. Last month, Apple accidentally reintroduced in iOS 12.4 a bug that had been patched in iOS 12.3. A security researcher who goes by the name Pwn20wnd on Twitter released unc0ver v3.5.2, a jailbreaking tool that can jailbreak A7-A11 devices.
In July, two members of the Google Project Zero team revealed details about six “interactionless” security bugs that affect iOS by exploiting the iMessage client. Four of these bugs can execute malicious code on a remote iOS device without any prior user interaction.

Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT
‘Dropbox Paper’ leaks out email addresses and names on sharing document publicly
DoorDash data breach leaks personal details of 4.9 million customers, workers, and merchants