Tech News - DevOps

82 Articles
Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images

Vincy Davis
03 Jul 2019
3 min read
Yesterday, Youhana Naseim, the Group Engineering Manager at Azure Pipelines, published a post-mortem of the bug that caused the sqlite3 module for Python to go missing from the Ubuntu 16.04 image starting May 14th. The Azure DevOps team identified the bug on May 31st and fixed it on June 26th. Naseim apologized to all the affected customers for the delay in detecting and fixing the issue.

https://twitter.com/hawl01475954/status/1134053763608530945
https://twitter.com/ProCode1/status/1134325517891411968

How the Azure DevOps team detected and fixed the issue

The Azure DevOps team upgraded the versions of Python included in the Ubuntu 16.04 image with the M151 payload. These versions of Python's build scripts treat sqlite3 as an optional module, so the builds succeeded despite the missing module. Naseim says, "While we have test coverage to check for the inclusion of several modules, we did not have coverage for sqlite3 which was the only missing module."

The issue was first reported via the Azure Developer Community on May 20th by a user who had received the M151 deployment containing the bug, but the Azure support team escalated it only after receiving more reports during the M152 deployment on May 31st. After posting a workaround, the team decided to ship the fix with the M153 deployment, since completing the M152 deployment would have taken at least 10 days. Further, due to an internal miscommunication, the team did not start the M153 deployment to Ring 0 until June 13th.

Note: To safeguard the production environment, Azure DevOps rolls out changes in a progressive and controlled manner via the ring model of deployments.

The team resumed deployment to Ring 1 on June 17th and reached Ring 2 by June 20th. Finally, after a few failures, the team fully deployed M153 by June 26th.

Azure's future plans to deliver timely fixes

The Azure team has set out plans to improve its deployment and hotfix processes, with the aim of delivering timely fixes. The long-term plan is to give customers the ability to revert to the previous image as a quick workaround for issues introduced in new images. The medium- and short-term plans are given below; a sketch of the module-coverage check mentioned in the short-term plans follows this article.

Medium-term plans:
- Add the ability to better compare what changed on the images, to catch any unexpected discrepancies that the test suite might miss.
- Increase the speed and reliability of the deployment process.

Short-term plans:
- Build a full CI pipeline for image generation that verifies images daily.
- Add test coverage for all modules in the Python standard library, including sqlite3.
- Improve communication with the support team so that issues are escalated more quickly.
- Add telemetry to make it possible to detect and diagnose issues more quickly.
- Implement measures that enable quickly reverting to prior image versions to mitigate issues faster.

Visit the Azure DevOps status site for more details.

Read More:
- Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards
- Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
- Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
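The coverage gap the short-term plans call out is straightforward to guard against in CI. Below is a minimal sketch of such a module-coverage check: it tries to import every module the image is expected to ship, instead of only a handful. The module list here is illustrative, not Azure's actual manifest.

```python
# A minimal sketch of a stdlib module-coverage check. The EXPECTED_MODULES
# list is an illustrative assumption, not Azure's actual manifest.
import importlib
import sys

EXPECTED_MODULES = ["sqlite3", "ssl", "zlib", "bz2", "lzma", "ctypes"]

def missing_modules(names):
    """Return the subset of names that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    missing = missing_modules(EXPECTED_MODULES)
    if missing:
        print(f"Missing modules: {', '.join(missing)}")
        sys.exit(1)  # fail the CI job so the broken image never ships
    print("All expected modules are importable.")
```

Run against the affected images, a check like this would have failed the daily image-verification pipeline the moment sqlite3 went missing.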
Docker announces Docker Desktop Enterprise

Savia Lobo
05 Dec 2018
3 min read
Yesterday, at DockerCon Europe 2018, Docker announced Docker Desktop Enterprise, an easy, fast, and secure way to build production-ready containerized applications.

Docker Desktop Enterprise

Docker Desktop Enterprise is a new addition to Docker's desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. The Enterprise version enables developers to work with the frameworks and languages they are comfortable with, and assists IT teams in safely configuring, deploying, and managing development environments while adhering to corporate standards. The Enterprise version thus enables organizations to quickly move containerized applications from development to production and reduce their time to market.

Features of Docker Desktop Enterprise

Enterprise manageability: With Docker Desktop Enterprise, IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production. For IT teams, Docker Desktop Enterprise is packaged as standard MSI (Windows) and PKG (Mac) distribution files, which work with existing endpoint management tools and support lockable settings via policy files. This edition also provides developers with ready-to-code, customized, and approved application templates.

Enterprise deployment and configuration packaging: IT desktop admins can deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools, using the standard MSI and PKG files. Desktop administrators can also enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience. Application architects provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs.

Increased developer productivity and production-ready containerized applications: Developers can quickly use company-provided application templates that instantly replicate production-approved application configurations on the local desktop, via configurable version packs. With these version packs, developers can synchronize their desktop development environment with the same Docker API and Kubernetes versions used in production with Docker Enterprise; no Docker CLI commands are required to get started. Developers can also use the Application Designer interface's template-based workflows for creating containerized applications; for someone who has never launched a container before, the Application Designer provides the foundational container artifacts and the organization's skeleton code to get started with containers in minutes.

Read more about Docker Desktop Enterprise here.

Gremlin makes chaos engineering with Docker easier with new container discovery feature
Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login
Zeit releases Serverless Docker in beta
GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

Amrata Joshi
23 Sep 2019
3 min read
Yesterday, the team at GitLab released GitLab 12.3, a DevOps lifecycle tool that provides a Git repository manager. This release comes with a web application firewall, productivity analytics, a new Environments section, and much more.

What's new in GitLab 12.3?

Web Application Firewall: In GitLab 12.3, the team has shipped the first iteration of the Web Application Firewall, built into the GitLab SDLC platform. It focuses on monitoring and reporting security concerns related to Kubernetes clusters.

Productivity Analytics: From GitLab 12.3, the team has started releasing Productivity Analytics, which will help teams and their leaders discover best practices for better productivity, drill into the data, and learn insights for future improvements. The group-level analytics workspace can be used to provide insight into performance, productivity, and visibility across multiple projects.

Environments section: This release adds an "Environments" section to the cluster page, giving an overview of all the projects that make use of the Kubernetes cluster.

License compliance: The License Compliance feature can be used to disallow a merge when a blacklisted license is found in a merge request.

Keyboard shortcuts: This release adds new 'n' and 'p' keyboard shortcuts that move to the next and previous unresolved discussions in merge requests.

System hooks: System hooks allow automation by triggering requests whenever a variety of events take place in GitLab. (A sketch of registering one via the API follows this article.)

Multiple IP subnets: This release introduces the ability to specify multiple IP subnets, so instead of specifying a single range, large organizations can now restrict incoming traffic to their specific needs.

GitLab Runner 12.3: Yesterday, the team also released GitLab Runner 12.3, an open-source project used to run CI/CD jobs and send the results back to GitLab.

Audit logs: In this release, audit logs for push events are disabled by default, to prevent performance degradation on GitLab instances.

A few GitLab users are unhappy that some features of this release, including Productivity Analytics, are available to Premium or Ultimate users only.

https://twitter.com/gav_taylor/status/1175798696769916932

To know more about this news, check out the official page.

Other interesting news in cloud and networking:
- Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
- DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
- Istio 1.3 releases with traffic management, improved security, and more!
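For readers curious what the system-hooks feature looks like in practice, here is a hedged sketch of registering one through GitLab's admin-only REST endpoint (POST /api/v4/hooks). The instance host, token, and receiver URL are placeholders.

```python
# A hedged sketch of registering a GitLab system hook via the REST API.
# POST /api/v4/hooks is an admin-only endpoint; host, token, and receiver
# URL below are placeholders, not a real deployment.
import requests

GITLAB_HOST = "https://gitlab.example.com"  # placeholder instance
ADMIN_TOKEN = "REPLACE_WITH_ADMIN_TOKEN"    # placeholder admin access token

resp = requests.post(
    f"{GITLAB_HOST}/api/v4/hooks",
    headers={"PRIVATE-TOKEN": ADMIN_TOKEN},
    json={
        "url": "https://hooks.example.com/gitlab-events",  # event receiver
        "push_events": True,              # fire on repository pushes
        "enable_ssl_verification": True,  # verify the receiver's TLS cert
    },
)
resp.raise_for_status()
print("Registered system hook with id:", resp.json()["id"])
```

Once registered, the receiver URL gets a POST for each matching event, which is what makes the automation described above possible.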
Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Melisha Dsouza
23 Aug 2018
4 min read
Five years ago, Docker was the talk of the town because it made it possible to get a number of apps running on the same old servers while also making programs easy to package and ship. The same cannot be said about Docker now, as the company faces public disapproval over its decision to allow Docker for Mac and Windows to be downloaded only by users logged into the Docker Store. Its quest for "improving the user experience" is clearly facing major roadblocks.

Two years ago, every bug report and reasonable feature request was "hard" or "something you don't want" and would result in endless back and forth for users. On 02 June 2016, new repository keys were pushed to the Docker public repository. As a direct consequence, any run of "apt-get update" (or equivalent) on a system configured with the broken repo failed with the error "Error https://apt.dockerproject.org/ Hash Sum mismatch." The issue affected all systems worldwide that were configured with the Docker repository: all Debian and Ubuntu versions, independent of OS and Docker versions, faced the meltdown. It became impossible to run a system update or upgrade on an existing system. This seven-hour outage received little tech news coverage; all that marked it was a few messages on a GitHub issue. You would have expected Docker to be a little more careful after that controversy, but lo and behold, here comes yet another badly managed change implementation.

The current matter in question

On June 20th 2018, GitHub and Reddit were abuzz with comments from confused Docker users on how they couldn't download Docker for Mac or Windows without logging into the Docker Store. The following URLs were spotted with the problem: Install Docker for Mac and Install Docker for Windows. A Docker spokesperson responded that the change was incorporated to improve the Docker for Mac and Windows experience for users moving forward. This led to a string of accusations from dedicated Docker users; some of their complaints are captured in screenshots on the GitHub issue thread (source: github.com). The issue is still ongoing, and with no further statements released from the Docker team, users are left in the dark.

In spite of all the hullabaloo, why choose Docker?

A report by DZone indicates that Docker adoption by companies was up 30% in the last year. Its annual revenue is expected to increase 4x, growing from $749 million in 2016 to more than $3.4 billion by 2021, a compound annual growth rate (CAGR) of 35 percent. So what is this company doing differently?

It's no secret that Docker containers are easy to deploy in a cloud. Docker can be incorporated into most DevOps toolchains, including Puppet, Chef, Vagrant, and Ansible, some of the major tools in configuration management. Specifically, for CI/CD, Docker makes it achievable to:
- Set up local development environments that are exactly like a live server, and run multiple development environments from the same host, each with unique software, operating systems, and configurations.
- Test projects on new or different servers.
- Allow multiple users to work on the same project with the exact same settings, regardless of the local host environment.
- Ensure that applications running in containers are completely segregated and isolated from each other, giving complete control over traffic flow and management.

So, what's the verdict?

Most users saw Docker's move as manipulative, since the company is literally asking people to log in with their information so it can target them with ad campaigns and spam emails to make money. However, there were also some in support of the move (source: github.com). One Reddit user noted that while there is no direct solution to this issue, you can use https://github.com/moby/moby/releases as a workaround, or a proper package manager if you're on Linux. Hopefully, Docker takes this as a cue before releasing any more updates that could spark public outcry. It will be interesting to see how many companies stick around and use Docker despite the rollercoaster ride its users are put through. You can find further opinions on this matter at reddit.com.

Docker isn't going anywhere
Zeit releases Serverless Docker in beta
What's new in Docker Enterprise Edition 2.0?
Introducing kdevops, a modern DevOps framework for Linux kernel development

Fatema Patrawala
20 Aug 2019
3 min read
Last Friday, Luis Chamberlain announced the release of kdevops, a DevOps framework for Linux kernel development. Chamberlain wrote in his email, "the goal behind this project is to provide a modern devops framework for Linux kernel development. It is not a test suite, it is designed to use any test suites, and more importantly, it allows us to let us easily set up test environments in a jiffie. It supports different virtualization environments, and different cloud environments, and supports different Operating Systems."

kdevops is a sample framework which lets you easily set up a testing environment for a number of different use cases.

How does kdevops work?

kdevops relies on Vagrant, Terraform, and Ansible to get you going with your virtualization, bare-metal, or cloud provisioning environment. It relies heavily on public Ansible Galaxy roles and Terraform modules. This lets the kdevops team share code with the community and offer the project as a demo framework that uses these Ansible roles and Terraform modules.

There are three parts to the long-term ideals for kdevops:
1. Provisioning the required virtual hosts or cloud environment
2. Provisioning your requirements
3. Running whatever you want

Ansible is used to fetch all the required Ansible roles. Then Vagrant or Terraform can be used to provision hosts. Vagrant makes use of two Ansible roles: one to update ~/.ssh/config, and one to update the systems with basic development preference files, things like .gitconfig or bashrc hacks; this last part is handled by the devconfig Ansible role. Since ~/.ssh/config is updated, you can then run further Ansible roles manually when using Vagrant. If using Terraform for cloud environments, it updates ~/.ssh/config directly without Ansible; however, since access to hosts on cloud environments can vary in time, running all Ansible roles is expected to be done manually. (A sketch of this flow appears after this article.)

What you can do with kdevops:
- Full Vagrant provisioning, including updating your ~/.ssh/config
- Terraform provisioning on different cloud providers
- Running Ansible to install dependencies on Debian
- Using Ansible to clone, compile, and boot into any random kernel git tree with a supplied config
- Updating ~/.ssh/config for Terraform, first tested with the OpenStack provider, with both generic and special minicloud support; other Terraform providers just require making use of the newly published Terraform module add-host-ssh-config

On Hacker News, the release has received positive reviews, though some users wonder what it has to do with DevOps, as it appears to be automated provisioning of test environments. One comment reads, "This looks cool, but I'm not sure what it has to do with devops? It just seems to be automated test environment provisioning, am I missing something?" On Reddit as well, Linux users are happy with this setup and find it really promising; one of the comments reads, "I have so much hacky scriptwork around kvm, have always been looking for a cleaner setup; this looks super promising. thank you."

To know more about this release, check out the official announcement as well as the GitHub page.

Why do IT teams need to transition from DevOps to DevSecOps?
Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Azure DevOps report: How a bug caused 'sqlite3 for Python' to go missing from Linux images
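To make the provisioning flow concrete, the sketch below expresses it as a small Python wrapper around the same tools, purely for illustration; the file names and directory layout are assumptions, not kdevops' actual interface.

```python
# A hedged sketch of the kdevops-style flow described above, driven from
# Python. File names and directory layout are illustrative assumptions,
# not kdevops' actual interface.
import subprocess

def run(cmd, cwd=None):
    """Echo and run a command, stopping on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

# 1. Fetch the public Ansible Galaxy roles the framework builds on.
run(["ansible-galaxy", "install", "-r", "requirements.yml"])

# 2. Provision hosts: Vagrant for local VMs, Terraform for cloud targets.
use_cloud = False  # flip to True for a cloud provider
if use_cloud:
    run(["terraform", "init"], cwd="terraform")
    run(["terraform", "apply", "-auto-approve"], cwd="terraform")
else:
    run(["vagrant", "up"], cwd="vagrant")

# 3. With ~/.ssh/config updated by provisioning, apply further roles
#    manually, e.g. a devconfig-style role for .gitconfig/bashrc hacks.
run(["ansible-playbook", "-i", "hosts", "devconfig.yml"])
```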
DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020

Fatema Patrawala
19 Sep 2019
4 min read
Yesterday, GitLab, a San Francisco-based start-up, raised $268 million in a Series E funding round, valuing the company at $2.75 billion, more than double its last valuation. In its Series D round of $100 million, the company was valued at $1.1 billion; with this announcement, the valuation has more than doubled in less than a year.

GitLab provides a DevOps platform for developing and collaborating on code, offering a single application for companies to draft, develop, and release code. The product is used by companies like Delta Air Lines Inc., Ticketmaster Entertainment Inc., and Goldman Sachs Group Inc. The Series E round was led by investors including Adage Capital Management, Alkeon Capital, Altimeter Capital, Capital Group, Coatue Management, D1 Capital Partners, Franklin Templeton, Light Street Capital, Tiger Management Corp., and Two Sigma Investments.

GitLab plans to go public in November 2020

According to Forbes, GitLab has already set November 18, 2020 as the date for going public, and the company seems primed and ready for the eventual IPO. The $268 million gives the company considerable runway ahead of the planned event, along with the flexibility to choose how to take the company public. "One other consideration is that there are two options to go public. You can do an IPO or direct listing. We wanted to preserve the optionality of doing a direct listing next year. So if we do a direct listing, we're not going to raise any additional money, and we wanted to make sure that this is enough in that case," Sid Sijbrandij, GitLab co-founder and CEO, explained in an interview with TechCrunch.

He further adds that the new funds will be used to add monitoring and security to GitLab's offering, and to grow the company's staff from its current 400 employees to more than 1,000 this year. GitLab is able to add workers at a rapid rate because it has an all-remote workforce.

GitLab wants to be independent and chooses transparency for its community

Sijbrandij says the company made a deliberate decision to be transparent early on. For a company based on an open-source project, the transition to a commercial company is sometimes tricky, and it can have a negative impact on the community and the number of contributions. Transparency was a way to combat that, and it seems to be working: he reports that the community contributes 200 improvements to the GitLab open-source products every month, double the amount of just a year ago, so the community is still highly active.

He did not ignore the fact that Microsoft acquired GitHub, a similar company that helps developers manage and distribute code in a DevOps environment, last year for $7.5 billion. He claims that in spite of that eye-popping number, his goal is to remain an independent company and take it through to the next phase. "Our ambition is to stay an independent company. And that's why we put out the ambition early to become a listed company. That's not totally in our control as the majority of the company is owned by investors, but as long as we're more positive about the future than the people around us, I think we can we have a shot at not getting acquired," he said.

Community is happy with GitLab's products and services

Overall, the community is happy with this news and with GitLab's products and services. One of the comments on Hacker News reads, "Congrats, GitLab team. Way to build an impressive business. When anybody tells you there are rules to venture capital — like it's impossible to take on massive incumbents that have network effects — ignore them. The GitLab team is doing something phenomenal here. Enjoy your success! You've earned it."

Another user comments, "We've been using Gitlab for 4 years now. What got us initially was the free private repos before github had that. We are now a paying customer. Their integrated CICD is amazing. It works perfectly for all our needs and integrates really easily with AWS and GCP. Also their customer service is really damn good. If I ever have an issue, it's dealt with so fast and with so much detail. Honestly one of the best customer service I've experienced. Their product is feature rich, priced right and is easy. I'm amazed at how the operate. Kudos to the team"

Other interesting news in programming:
- Microsoft open-sources its C++ Standard Library (STL) used by MSVC tool-chain and Visual Studio
- Linux 5.3 releases with support for AMD Navi GPUs, Zhaoxin x86 CPUs and power usage improvements
- NVIM v0.4.0 releases with new API functions, Lua library, UI events and more!
Google's kaniko - An open-source build tool for Docker images in Kubernetes, without root access

Savia Lobo
27 Apr 2018
2 min read
Google recently introduced kaniko, an open-source tool for building container images from a Dockerfile, even without privileged root access. Prior to kaniko, building images from a standard Dockerfile typically depended on interactive access to a Docker daemon, which requires root access on the machine to run. That makes it difficult to build container images in environments that can't easily or securely expose their Docker daemons, such as Kubernetes clusters. kaniko was created to combat these challenges.

With kaniko, one can build an image from a Dockerfile and push it to a registry. Since it doesn't require any special privileges or permissions, kaniko can run in a standard Kubernetes cluster, in Google Kubernetes Engine, or in any environment that can't grant access to privileges or to a Docker daemon.

How does the kaniko build tool work?

kaniko runs as a container image that takes in three arguments: a Dockerfile, a build context, and the name of the registry to which it should push the final image. The image is built from scratch and contains only a static Go binary plus the configuration files needed for pushing and pulling images. (Figure: kaniko image generation.)

The kaniko executor extracts the base image's file system into the root. It executes each command in order and takes a snapshot of the file system after each command. The snapshot is created in user space, where the file system is running, and compared to the previous state held in memory. All changes in the file system are appended to the base image, along with the relevant changes to the image metadata. After each command in the Dockerfile executes successfully, the executor pushes the newly built image to the desired registry.

In short, kaniko unpacks the filesystem, executes commands, and snapshots the filesystem entirely in user space within the executor image; that is how it avoids requiring privileged access on your machine, with neither the Docker daemon nor the CLI involved. (A sketch of launching the executor in a cluster follows this article.)

To know more about how to run kaniko in a Kubernetes cluster and in the Google Cloud Container Builder, read the documentation in the GitHub repo.

The key differences between Kubernetes and Docker Swarm
Building Docker images using Dockerfiles
What's new in Docker Enterprise Edition 2.0?
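As a concrete illustration of the three arguments the executor takes, here is a hedged sketch of launching a one-off kaniko build pod with the official Kubernetes Python client. The namespace, context bucket, and destination registry are placeholders, and a real setup would also mount registry credentials into the pod.

```python
# A hedged sketch: run the kaniko executor as a one-shot pod. Namespace,
# context bucket, and destination registry are placeholders; registry
# credentials (e.g. a docker-registry secret) are omitted for brevity.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="kaniko-build"),
    spec=client.V1PodSpec(
        restart_policy="Never",  # one-shot build job
        containers=[
            client.V1Container(
                name="kaniko",
                image="gcr.io/kaniko-project/executor:latest",
                args=[
                    "--dockerfile=Dockerfile",                  # Dockerfile
                    "--context=gs://my-bucket/context.tar.gz",  # build context
                    "--destination=gcr.io/my-project/app:v1",   # target image
                ],
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because the pod needs no privileged security context, this is exactly the kind of environment (a standard cluster without Docker daemon access) the article describes.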
Amazon EKS Windows Container Support is now generally available

Savia Lobo
10 Oct 2019
2 min read
A few days ago, Amazon announced the general availability of Windows container support on Amazon Elastic Kubernetes Service (EKS). The company announced a preview of the support in March this year and invited customers to try it out and provide feedback.

With Windows container support, development teams can now deploy applications designed to run on Windows Server on Kubernetes, alongside Linux applications. It also brings more consistency in system logging, performance monitoring, and code deployment pipelines. "We are proud to be the first Cloud provider to have General Availability of Windows Containers on Kubernetes and look forward to customers unlocking the business benefits of Kubernetes for both their Windows and Linux workloads," the official post mentions.

A few considerations before deploying worker nodes:
- Windows workloads are supported with Amazon EKS clusters running Kubernetes version 1.14 or later.
- Amazon EC2 instance types C3, C4, D2, I2, M4 (excluding m4.16xlarge), and R3 are not supported for Windows workloads.
- Host networking mode is not supported for Windows workloads.
- Amazon EKS clusters must contain one or more Linux worker nodes to run core system pods that only run on Linux, such as coredns and the VPC resource controller.
- The kubelet and kube-proxy event logs are redirected to the Amazon EKS Windows Event Log and are set to a 200 MB limit.

In a demonstration, Martin Beeby, a principal evangelist for Amazon Web Services, created a new Amazon EKS cluster (the steps work with any cluster running Kubernetes version 1.14 and above), added some new Windows nodes, and deployed a Windows application. (A sketch of targeting Windows nodes appears after this article.) For the complete demonstration and to know more about Windows container support in Amazon EKS, read AWS' official blog post.

Amazon EBS snapshots exposed publicly leaking sensitive data in hundreds of thousands, security analyst reveals at DefCon 27
Amazon is being sued for recording children's voices through Alexa without consent
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available
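Because clusters now mix Linux and Windows nodes, Windows workloads have to be pinned to Windows nodes at scheduling time. Below is a hedged sketch using the Kubernetes Python client and the standard kubernetes.io/os node label; the image and namespace are placeholders, not taken from AWS's demo.

```python
# A hedged sketch of pinning a Deployment to Windows worker nodes via the
# standard kubernetes.io/os node label. Image and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "iis-sample"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="iis-sample"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                node_selector={"kubernetes.io/os": "windows"},  # Windows only
                containers=[client.V1Container(
                    name="iis",
                    image="mcr.microsoft.com/windows/servercore/iis",
                )],
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```

Without the node selector, the scheduler could place the Windows image on a Linux node (or a Linux system pod on a Windows node), which is why the considerations above require at least one Linux worker node in every cluster.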
StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security

Savia Lobo
12 Sep 2019
3 min read
Today, StackRox, a company providing threat protection for containers and Kubernetes, announced the availability of the StackRox App for the Sumo Logic Continuous Intelligence Platform.

The StackRox App for Sumo Logic gives customers critical insights into misconfigurations and security events for their container and Kubernetes environments directly within their Sumo Logic Dashboard. Using this app, security teams can view StackRox data on vulnerabilities, misconfigurations, runtime threats, and other policy violations within Sumo Logic and streamline their remediation efforts.

John Coyle, vice president of business development for Sumo Logic, said, "We're excited to launch our Kubernetes security integration with StackRox since it will enable customers to gain unparalleled insights and operational metrics in a single dashboard to ensure their cloud-native environments are continuously protected." He further added, "The StackRox Kubernetes-native container security platform provides unique context on misconfigurations, risk profiling, and runtime incidents that will enable our joint customers to more quickly identify and address security issues."

The StackRox App for Sumo Logic surfaces key metrics such as vulnerabilities, runtime threats, and compliance violations across container and Kubernetes environments through the following dashboards:

- StackRox Overview: offers a snapshot of key metrics about an organization's overall Kubernetes and container security posture.
- StackRox Image Violations: displays information from StackRox's image scanning and vulnerability management capabilities and prioritizes security issues in container images based on rich context derived from Kubernetes.
- StackRox Kubernetes Violations: highlights a prioritized list of misconfigurations of Kubernetes components, based on more than 70 DevOps and security best practices.
- StackRox Runtime Violations: provides insights into threats and other suspicious activity at runtime, based on continuous monitoring of every single container within Kubernetes environments.

Richard Reinders, manager of security operations for Looker, a joint StackRox and Sumo Logic customer, said, "StackRox gives us a Kubernetes-centric single pane of glass view into the security posture of our multi-cloud infrastructure. Having StackRox's unique Kubernetes security insights available directly on our Sumo Logic Dashboard provides us with a single place to view security and compliance details alongside our operational analytics for our cloud-native infrastructure. This integration also allows us to use a single, consistent, security event detection and response pipeline."

To learn more about the StackRox App for Sumo Logic, head over to its official website.

Other interesting news in security:
- CNCF-led open-source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
- Over 47K Supermicro servers' BMCs are prone to USBAnywhere, a remote virtual media vulnerability
- Espressif IoT devices susceptible to WiFi vulnerabilities can allow hijackers to crash devices connected to enterprise networks
GitLab retracts its privacy invasion policy after backlash from community

Vincy Davis
25 Oct 2019
3 min read
Yesterday, GitLab retracted its earlier decision to implement user-level product usage tracking on its websites after receiving negative feedback from users.

https://twitter.com/gitlab/status/1187408628531322886

Two days ago, GitLab had informed users that starting with its next, yet-to-be-released version (12.4), JavaScript snippets would be added to GitLab.com (GitLab's SaaS offering) and to GitLab's proprietary Self-Managed packages (Starter, Premium, and Ultimate). These JavaScript snippets would interact with GitLab and third-party SaaS telemetry services.

Read More: GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

GitLab.com users were specifically notified that until they accepted the new terms of service, their access to the web interface and API would be blocked. This meant that users with integrations against the API would experience a brief pause of service until the new terms were accepted by signing in to the web interface. Self-managed users, on the other hand, were told they could continue to use the free GitLab Core software without any changes.

The DevOps coding platform says SaaS telemetry products are important tools for understanding how users behave inside web-based applications. According to the company, this additional user information would help increase website speed and enrich the user experience. "GitLab has a lot of features, and a lot of users, and it is time that we use telemetry to get the data we need for our product managers to improve the experience," stated the official blog. The telemetry tools would use JavaScript snippets executed in the user's browser to send user information back to the telemetry service.

Read More: GitLab faces backlash from users over performance degradation issues tied to redis latency

The company had also assured users that it would disclose all the whereabouts of the user information in its privacy policy, and that the third-party telemetry services would have data protection standards equivalent to its own and would aim for SOC 2 compliance. Users who did not wish to be tracked could turn on the Do Not Track (DNT) mechanism in their browser; with DNT enabled, the JavaScript snippet would not be loaded. "The only downside to this is that users may also not get the benefit of in-app messaging or guides that some third-party telemetry tools have that would require the JavaScript snippet," added the official blog. (A sketch of honoring DNT server-side follows this article.)

Following this announcement, GitLab received loads of negative feedback from users.

https://twitter.com/PragmaticAndy/status/1187420028653723649
https://twitter.com/Cr0ydon/status/1187380142995320834
https://twitter.com/BlindMyStare/status/1187400169303789568
https://twitter.com/TheChanceSays/status/1187095735558238208

Although GitLab has rolled back the telemetry changes for now and is reconsidering its decision, many users are warning it to drop the idea completely.

https://twitter.com/atom0s/status/1187438090991751168
https://twitter.com/ry60003333/status/1187601207046524928
https://twitter.com/tresronours/status/1187543188703186949

DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
GitLab goes multicloud using Crossplane with kubectl
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
PostGIS 3.0.0 releases with raster support as a separate extension
Electron 7.0 releases in beta with Windows on Arm 64 bit, faster IPC methods, nativetheme API and more
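As an illustration of the DNT behavior GitLab described, here is a minimal, hypothetical Flask sketch (not GitLab's code) that renders a telemetry snippet only when the browser has not sent the "DNT: 1" header.

```python
# A minimal, hypothetical sketch (not GitLab's implementation) of honoring
# Do Not Track: the telemetry <script> tag is rendered only when the
# browser did not send "DNT: 1".
from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """<html><body>
<h1>Dashboard</h1>
{% if telemetry %}<script src="/static/telemetry.js"></script>{% endif %}
</body></html>"""

@app.route("/")
def index():
    dnt_enabled = request.headers.get("DNT") == "1"  # browser's DNT signal
    return render_template_string(PAGE, telemetry=not dnt_enabled)

if __name__ == "__main__":
    app.run()
```

Skipping the snippet entirely, as GitLab proposed, also skips any in-app messaging it would have delivered, which is the downside the blog post acknowledged.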
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

Melisha Dsouza
12 Dec 2018
3 min read
At KubeCon+CloudNativeCon, happening in Seattle this week, Elastic N.V., the company behind Elasticsearch and the Elastic Stack, announced the alpha availability of Helm Charts for Elasticsearch on Kubernetes. These Helm Charts make it possible to deploy Elasticsearch and Kibana to Kubernetes almost instantly.

Developers use Helm charts for their flexibility in creating, publishing, and sharing Kubernetes applications. The ease of using Kubernetes to manage containerized workloads has also led Elastic users to deploy their Elasticsearch workloads to Kubernetes. Now, with Helm chart support for Elasticsearch on Kubernetes, developers can harness the benefits of both Helm charts and Kubernetes to install, configure, upgrade, and run their applications on Kubernetes. With this new functionality in place, users can take advantage of best practices and templates to deploy Elasticsearch and Kibana, and they get access to some basic free features like monitoring, Kibana Canvas, and Spaces. According to the blog post, the Helm charts will serve as a "way to help enable Elastic users to run the Elastic Stack using modern, cloud-native deployment models and technologies." (A sketch of installing the charts follows this article.)

Why should developers consider Helm charts?

Helm charts are known for giving users the ability to leverage Kubernetes packages through the click of a button or a single CLI command. Kubernetes is sometimes complex to use, which impairs developer productivity. Helm charts improve productivity as follows:

- With Helm charts, developers can focus on developing applications rather than deploying dev-test environments. They can author their own chart, which in turn automates deployment of their dev-test environment.
- Helm comes with "push button" deployment and deletion of apps, making adoption and development of Kubernetes apps easier for those with little container or microservices experience.
- Combating the complexity of deploying a Kubernetes-orchestrated container application, Helm charts allow software vendors and developers to preconfigure their applications with sensible defaults, enabling users and deployers to change parameters of the application or chart through a consistent interface.
- Developers can incorporate production-ready packages while building applications in a Kubernetes environment, eliminating deployment errors caused by incorrect configuration file entries or mangled deployment recipes.
- Deploying and maintaining Kubernetes applications can be tedious and error-prone. Helm charts reduce the complexity of maintaining an app catalog in a Kubernetes environment. A central app catalog reduces duplication of charts (when shared within or between organizations) and spreads best practices by encoding them into charts.

To know more about Helm charts, check out the README files for the Elasticsearch and Kibana charts available on GitHub.

In addition to this announcement, Elastic also announced its collaboration with the Cloud Native Computing Foundation (CNCF) to promote and support open cloud-native technologies and companies, another step in Elastic's mission to build products in an open and transparent way. You can head over to Elastic's official blog for in-depth coverage of this news, or check out MarketWatch for more insights.

Dejavu 2.0, the open source browser by ElasticSearch, now lets you build search UIs visually
Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support
How to perform Numeric Metric Aggregations with Elasticsearch
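Getting started amounts to adding Elastic's chart repository and installing the charts. The sketch below drives the Helm commands from Python for consistency with the other examples; it assumes Elastic's published chart repository at https://helm.elastic.co and uses the Helm 2-era --name syntax current at the time of the release. The release names are arbitrary.

```python
# A hedged sketch of installing the alpha Elasticsearch and Kibana charts.
# Assumes Elastic's chart repo at https://helm.elastic.co and Helm 2-era
# syntax; release names are arbitrary.
import subprocess

def helm(*args):
    """Run a helm command, raising if it fails."""
    subprocess.run(["helm", *args], check=True)

helm("repo", "add", "elastic", "https://helm.elastic.co")  # add chart repo
helm("repo", "update")                                      # refresh indexes
helm("install", "--name", "elasticsearch", "elastic/elasticsearch")
helm("install", "--name", "kibana", "elastic/kibana")
```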
Atlassian acquires OpsGenie, launches Jira Ops to make incident response more powerful

Bhagyashree R
05 Sep 2018
2 min read
Yesterday, Atlassian made two major announcements: the acquisition of OpsGenie and the release of Jira Ops. Both products aim to help IT operations teams resolve downtime quickly and reduce the occurrence of incidents over time. Atlassian is an Australian enterprise software company that develops collaboration software for teams, with products including Jira, Confluence, HipChat, Bitbucket, and Stash.

OpsGenie: Alert the right people at the right time

OpsGenie is an IT alert and notification management tool that helps route critical alerts to all the right people (operations and software development teams). It uses a sophisticated combination of scheduling, escalation paths, and notifications that take things like time zones and holidays into account. OpsGenie is a prompt and reliable alerting system with the following features:

- It integrates with monitoring, ticketing, and chat tools to notify the team through multiple channels, providing the information necessary for the team to immediately begin resolution.
- It provides various notification methods, such as email, SMS, push, phone call, and group chat, to ensure alerts are seen by users.
- You can build and modify schedules and define escalation rules within one interface.
- It tracks everything related to alerts and incidents, helping you gain insight into areas of success and opportunities for improvement.
- You can define escalation policies and on-call schedules with rotations, to notify the right people and escalate when necessary.

(A sketch of creating an alert through the OpsGenie API follows this article.)

Jira Ops: Resolve incidents faster

Jira Ops is a unified incident command center that gives the response team a single place for response coordination. It integrates with OpsGenie, Slack, Statuspage, PagerDuty, and xMatters. It guides the response team through the response workflow and automates common steps, such as creating a new Slack room for each incident. Jira Ops is available through Atlassian's early access program. Jira Ops helps you resolve downtime quickly by providing the following functionality:

- It quickly alerts you about what is affected and what the associated impacts are.
- You can check the status, severity level, and duration of the incident.
- You can see real-time response activities.
- You can also find the associated Slack channel, the current incident manager, and the technical lead.

You can find more details on OpsGenie and Jira Ops on Atlassian's official website.

Atlassian sells Hipchat IP to Slack
Atlassian open sources Escalator, a Kubernetes autoscaler project
Docker isn't going anywhere
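To make the alerting flow concrete, here is a hedged sketch of creating an alert through OpsGenie's v2 REST API; the API key, alert message, and team name are placeholders, not values from the announcement.

```python
# A hedged sketch of creating an alert via OpsGenie's v2 REST API.
# The API key, message, and team name are placeholders.
import requests

API_KEY = "REPLACE_WITH_GENIE_KEY"  # an OpsGenie API integration key

resp = requests.post(
    "https://api.opsgenie.com/v2/alerts",
    headers={"Authorization": f"GenieKey {API_KEY}"},
    json={
        "message": "Checkout service returning 500s",          # what broke
        "priority": "P1",                                      # severity
        "responders": [{"name": "ops_team", "type": "team"}],  # who is paged
    },
)
resp.raise_for_status()
print("Alert accepted, request id:", resp.json()["requestId"])
```

From there, OpsGenie's schedules and escalation policies decide who actually gets notified, and over which channels.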
Gremlin makes chaos engineering with Docker easier with new container discovery feature

Richard Gall
28 Aug 2018
3 min read
Gremlin, the product bringing chaos engineering to a huge range of organizations, announced today that it has added a new feature to its product: container discovery. Container discovery will make it easier to run chaos engineering tests alongside Docker. Chaos engineering and containers have always been closely related; arguably, the loosely coupled architectural style of modern, container-driven software has in turn increased the demand for chaos engineering to improve software resiliency. Matt Fornaciari, Gremlin CTO, explains that "with today's updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What does Gremlin's new container discovery feature do?

Container discovery does two things: it makes it easier for engineers to identify specific Docker containers, and, more importantly, it allows them to simulate attacks or errors within those containerized environments. The real benefit is that it makes the testing process much easier for engineers. Containers are, the press release notes, "often highly dynamic, ephemeral, and difficult to pinpoint at a given moment," which means identifying and isolating a particular container to run a chaos test on can ordinarily be very challenging and time-consuming.

Gremlin has been working with the engineering team at Under Armour. Paul Osman, Senior Engineering Manager, says that "we use Gremlin to test various failure scenarios and build confidence in the resiliency of our microservices." The new feature could save the engineers a lot of time, as he explains: "the ability to target containerized services with an easy-to-use UI has reduced the amount of time it takes us to do fault injection significantly."

Read next: Chaos Engineering: managing complexity by breaking things

Why is Docker such a big deal for Gremlin?

As noted above, chaos engineering and containers are part of the same wider shift in software architectural styles. With Docker leading the way in containerization, its market share growing healthily, making it easier to perform resiliency tests on containers is incredibly important for the product. It's not a stretch to say that Gremlin has probably been working on this feature for some time, with users placing it high on their list of must-haves.

Chaos engineering is still in its infancy; this year's Skill Up report found that it remains on the periphery of many developers' awareness. That could quickly change, however, and Gremlin appears to be working hard to make chaos engineering not only more accessible but also more appealing to companies for whom software resiliency is essential.
Why It’s Time for Site Reliability Engineering to Shift Left from DevOps.com

Matthew Emerick
16 Oct 2020
1 min read
By adopting a multilevel approach to site reliability engineering and arming your team with the right tools, you can unleash benefits that impact the entire service-delivery continuum.

In today's application-driven economy, the infrastructure supporting business-critical applications has never been more important. In response, many companies are recruiting site reliability engineering (SRE) specialists to help them […]

The post Why It's Time for Site Reliability Engineering to Shift Left appeared first on DevOps.com.
ServiceNow Partners with IBM on AIOps from DevOps.com

Matthew Emerick
16 Oct 2020
1 min read
ServiceNow and IBM this week announced that the Watson artificial intelligence for IT operations (AIOps) platform from IBM will be integrated with the IT service management (ITSM) platform from ServiceNow. Pablo Stern, senior vice president for IT workflow products for ServiceNow, said once that capability becomes available later this year on the Now platform, IT […]

The post ServiceNow Partners with IBM on AIOps appeared first on DevOps.com.