Author Posts

“Instead of data scientists working on their models and advancing AI, they are spending their time doing DeepOps work”, MissingLink CEO, Yosi Taguri [Interview]

Amey Varangaonkar
08 Nov 2018
10 min read
Machine learning has shown immense promise across domains and industries in recent years. From helping with the diagnosis of serious ailments to powering autonomous vehicles, machine learning is finding useful applications across a spectrum of industries. However, the actual process of delivering business outcomes using machine learning currently takes too long and is too expensive, forcing some businesses to look for less burdensome alternatives. MissingLink.ai is a recently launched platform built to fix just this problem. It enables data scientists to spend less time on grunt work by automating and streamlining the entire machine learning cycle, giving them more time to apply actionable insights gleaned from the data.

Key Takeaways

Processing and managing the sheer volume of data is one of the key challenges that today's AI tools face.
Yosi thinks the idea of companies creating their own machine learning infrastructure doesn't make a lot of sense. Data professionals should be focusing on more important problems within their organizations by letting the platform take care of the grunt work.
MissingLink.ai is an AI platform born out of the need to simplify AI development by taking the common, menial data processing tasks away from data scientists and allowing them to focus on the bigger data-related issues, through experiment management, data management, and resource management.
MissingLink is part of the Samsung NEXT product development team, which aims to help businesses automate and accelerate their projects using machine learning.

We had the privilege of interviewing Mr. Yosi Taguri, the founder and CEO of MissingLink, to learn more about the platform and how it enables more effective deep learning.

What are the biggest challenges that companies come across when trying to implement a robust machine learning/deep learning pipeline in their organization? How does it affect their business?

The biggest challenge, simply put, is that today's AI tools can't keep up with the amount of data being produced. And it's only going to get more challenging from here! As datasets continue to grow, they will require more and more compute power, which means we risk falling farther behind unless we change the tools we're using. While everyone is talking about the promise of machine learning, the truth is that today, assessing data is still too time-consuming and too expensive. Engineers are spending all their time managing the sheer volume of data, rather than actually learning from it and being empowered to make changes.

Let's talk about MissingLink.ai, the platform you and your team have launched for accelerating deep learning across businesses. Why the name MissingLink? What was the motivation to launch this platform?

The name is actually a funny story, and it ties pretty neatly into why we created the platform. When we were starting out three years ago, deep learning was still a relatively new concept, and my team and I were working hard to master the intricacies of it. As engineers, we primarily worked with code, so being able to solve problems with data was a fascinating new challenge for us. We quickly realized that deep learning is really hard and moves very, very slowly. So we set out to solve the problem of how to build really smart machines really fast. By comparison, we thought of it through the lens of software development.
Our goal was to accelerate from a glacial pace to building machine learning algorithms faster -- because we felt that there was something missing, a missing link if you will.

MissingLink is a part of the growing Samsung NEXT product development team. How does it feel? What role do you think MissingLink will play in Samsung NEXT's plans and vision going forward?

Samsung NEXT's broader mission is to help startups reach their full potential and achieve their goals. More specifically, Samsung NEXT discovers and backs the engineers, innovators, builders, and entrepreneurs who will help Samsung define the future of software and services. The Samsung NEXT product development team is focused on building software and services that take advantage of, and accelerate opportunities related to, some of the biggest shifts in technology, including automation, supply and demand, and interfaces. This will require hardware and software to seamlessly come together. Over the past few years, nearly all startups have been leveraging AI for some component of their business, yet practical progress has been slower than promised. MissingLink is a foundational tool to enable the progress of these big changes, helping entrepreneurs with great use cases for machine learning to accelerate their projects.

Could you give us the key features of MissingLink.ai that make it stand out from the other AI platforms available out there? How will it help data scientists and ML engineers build robust, efficient machine learning models?

First off, MissingLink.ai is the most comprehensive AI platform out there. It handles the entire deep learning lifecycle and all its elements, including code, data, experiments, and resources. I'd say that our top features include:

Experiment management: see and compare the entire history of experiments. MissingLink.ai auto-documents every aspect.
Data management: a unique data store tracks the data versions used in every experiment, streams data, caches it locally, and only syncs changes.
Resource management: manages your resources with no extra infrastructure costs, using your AWS or other cloud credentials. It grows and shrinks your cloud resources as needed.

These features, together with our intuitive interface, really put data scientists and engineers in the driver's seat when creating AI models. Now they can have more control and spend less energy repeating experiments, giving them more time to focus on what is important.

Your press release on the launch of MissingLink states that "the actual process of delivering business outcomes currently takes too long and it is too expensive. MissingLink.ai was born out of a desire to fix that." Could you please elaborate on how MissingLink makes deep learning less expensive and more accessible?

Companies are currently spending too much time and devoting too many resources to the menial tasks that are necessary for building machine learning models. The more time data scientists spend on tasks like spinning up machines, copying files, and DevOps, the more money a company is wasting. MissingLink changes that through the introduction of something we're calling DeepOps, or deep learning operations, which allows data scientists to focus on data science and lets the machine take care of the rest. It's like DevOps, where the role is about making the process of software development more efficient and productionized, but the difference is that no one has been filling this role, and it's different enough that it's very specific to the task of deep learning.
Today, instead of data scientists working on their models and advancing AI, they are spending their time doing this DeepOps work. MissingLink reduces load time and facilitates easy data exploration by eliminating the need to copy files, through data management in a version-aware data store.

Most businesses are moving their operations onto the cloud these days, with AWS, Azure, GCP, etc. being their preferred cloud solutions. These platforms have sophisticated AI offerings of their own. Do you see AI platforms such as MissingLink.ai as competition to these vendors, or can the two work collaboratively?

I wouldn't call cloud companies our competitors; we don't provide the cloud services they do, and they don't provide the DeepOps service that we do. Yes, we are all trying to simplify AI, but we're going about it in very different ways. We can actually use a customer's public cloud provider as the infrastructure to run the MissingLink.ai platform. If customers provide us with their cloud credentials, we can even manage this for them directly.

Concepts such as reinforcement learning and deep learning for mobile are getting a lot of traction these days, and have moved out of the research phase into the application/implementation phase. Soon, they might start finding extensive business applications as well. Are there plans to incorporate these tools and techniques in the platform in the near future?

We support all forms of deep learning, including reinforcement learning. On the deep learning for mobile side, we think the edge is going to be a big thing as more and more developers around the world are exposed to deep learning. We plan to support it early next year.

Currently, data privacy and AI ethics have become a focal point of every company's AI strategy. We see tech conglomerates increasingly coming under the scanner for ignoring these key aspects in their products and services. This is giving rise to an alternate movement in AI, with privacy- and ethics-centric projects like Deon, Vivaldi, and Tim Berners-Lee's Solid. How does MissingLink approach the topics of privacy, user consent, and AI ethics? Are there processes, tools, or principles in place in the MissingLink ecosystem or development teams that balance these concerns?

When we started MissingLink, we understood that data is the most sensitive part of deep learning. It is the new IP. Companies spend 80% of their time attending to data, refining it, tagging it, and storing it, and are therefore reluctant to upload it to a third-party vendor. We have built MissingLink with that in mind: our solution allows customers to simply point us in the direction of where their data is stored internally, and without moving it, or having to access it as a SaaS solution, we are able to help them manage it by enabling version management just as they do with code. Then we can stream it directly to the machines that need the data for processing, and document which data was used for reproducibility.
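That emphasis on tying experiments to exact data versions is easy to illustrate. The sketch below is a toy, not MissingLink's actual API: it fingerprints a dataset with a content hash and records that fingerprint alongside an experiment's parameters, so a run can later be traced back to the exact data it saw. All names and values here are invented for the example.

```typescript
import { createHash } from "node:crypto";

// Toy version-aware data tracking (illustrative only, not MissingLink's API):
// fingerprint the exact records an experiment used, and log it with the run.
function datasetVersion(records: string[]): string {
  const hash = createHash("sha256");
  for (const record of records) hash.update(record);
  return hash.digest("hex").slice(0, 12); // short content-addressed version id
}

const run = {
  experiment: "lr-sweep-01", // hypothetical experiment name
  dataVersion: datasetVersion(["row-1,0.3,cat", "row-2,0.7,dog"]),
  params: { learningRate: 0.01, epochs: 10 },
};

console.log(run); // stored with results, the run is traceable to its exact data
```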
Finally, where do you see machine learning and deep learning heading in the near future? Do you foresee a change in the way data professionals work today? How will platforms like MissingLink.ai change the current way of working?

Right now, companies are creating their own machine learning infrastructure, and that doesn't make sense. Data professionals can and should be focusing on more important problems within their organizations. Platforms like MissingLink.ai free data scientists from the grunt work it takes to maintain that infrastructure, so they can focus on bigger-picture issues. This is the work that is not only more rewarding for engineers, but also creates the unique value that companies need to compete. We're excited to help empower more data professionals to focus on the work that actually matters.

It was wonderful talking to you, and this was a very insightful discussion. Thanks a lot for your time, and all the best with MissingLink.ai!

Read more

Michelangelo PyML: Introducing Uber's platform for rapid machine learning development
Tesseract version 4.0 releases with new LSTM based engine, and an updated build system
Baidu releases a new AI translation system, STACL, that can do simultaneous interpretation

Imran Bashir on the Fundamentals of Blockchain, its Myths, and an Ideal Path for Beginners

Expert Network
15 Feb 2021
5 min read
With the invention of Bitcoin in 2008, the world was introduced to a new concept, blockchain, which promised to revolutionize the whole of society and to have an impact upon every industry. This new concept is the technology that underpins Bitcoin. Blockchain technology is the backbone of cryptocurrencies, and it has applications in finance, government, media, and many other industries.

Some describe blockchain as a revolution, whereas another school of thought believes that it is going to be more evolutionary, and that it will take many years before any practical benefits of blockchain reach fruition. This thinking is correct to some extent but, in Imran Bashir's opinion, the revolution has already begun. It is a technology that has an impact on current technologies too, and possesses the ability to change them at a fundamental level.

Let's hear from Imran on the fundamentals of blockchain technology, its myths, and his recent book, Mastering Blockchain, Third Edition.

What is blockchain technology? How would you describe it to a beginner in the field?

Blockchain is a distributed ledger which runs on a decentralized peer-to-peer network. First introduced with Bitcoin as a mechanism that ensures the security of the electronic cash system, blockchain has now become a prime area of research with many applications in a variety of industries and sectors.

What should be the starting point for someone aiming to begin their journey in blockchain?

Focus on the underlying principles and core concepts, such as distributed systems, consensus, and cryptography, and develop using no helper tools at the start. Once you understand the basics and the underlying mechanics, you can use tools such as Truffle or some other framework to make your developer life easier; however, it is extremely important to learn the underlying concepts first.

What is the biggest myth about blockchain?

Sometimes people believe that blockchain IS cryptocurrency; however, that is not the case. Blockchain is the underlying technology behind cryptocurrencies that ensures the security and integrity of the system and prevents double spends. Cryptocurrency can be considered one application of blockchain technology out of many.

"Blockchain is one of the most disruptive emerging technologies today." How much do you agree with this?

Indeed, it is true. Blockchain is changing the way we do business. In the next five years or so, financial systems, government systems, and other major sectors will all have blockchain integrated in one way or another.

What are the factors driving the mainstream adoption of blockchain?

The development of standards, interoperability efforts, and consortium blockchains are all contributing towards mainstream adoption. Demand for more security, transparency, and decentralization in some sectors is also a key driver; for example, blockchain is a prime solution for decentralized sovereign identity.

How do you explain the term bitcoin mining?

Mining is a colloquial term for the process of creating new bitcoins, where a miner repeatedly tries to find a solution to a math puzzle, and whoever finds it first wins the right to create the new block and earns bitcoins as a reward.
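To make the "math puzzle" concrete, here is a minimal, illustrative proof-of-work loop in TypeScript. The difficulty value and block string are invented for the example; real Bitcoin mining hashes block headers with double SHA-256 at a vastly higher difficulty.

```typescript
import { createHash } from "node:crypto";

// Toy proof-of-work: find a nonce whose SHA-256 hash of (data + nonce)
// starts with `difficulty` zero characters. Illustrative values only.
function mine(blockData: string, difficulty = 4): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const hash = createHash("sha256").update(blockData + nonce).digest("hex");
    if (hash.startsWith(target)) return { nonce, hash };
  }
}

console.log(mine("block 1: alice pays bob 5 BTC"));
// Finding the nonce takes many attempts; verifying the winner takes one hash.
```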
How can blockchain protect the global economy?

I think with the trust, transparency, and security guarantees provided by blockchain, we can perceive a future where financial crime is limited to a great degree. That can have a good impact on the global economy. Furthermore, the development of CBDCs (central bank digital currencies) is expected to have a major impact on the economy and help to stabilize it. From an inclusion point of view, blockchain can allow unbanked populations to play a role in the global financial system. If cryptocurrencies replace the current monetary system, then because of the decentralized nature of blockchain, major cost savings can be achieved, as no intermediaries or banks will be required, and a peer-to-peer, extremely low-cost global financial system can emerge that could transform the world economy. The entire remittance ecosystem can evolve into an extremely low-cost, secure, real-time system that includes people who were previously unbanked. The possibilities are endless.

Tell us a bit about your book, Mastering Blockchain, Third Edition.

Mastering Blockchain, Third Edition is a unique combination of theory and practice. Not only does it provide a holistic view of most areas of blockchain technology, it also covers hands-on exercises using Ethereum, Bitcoin, Quorum, and Hyperledger to equip readers with both theoretical and practical knowledge. The third edition includes four new chapters on hot topics such as blockchain consensus, tokenization, Ethereum 2, and enterprise blockchains.

About the author

Imran Bashir has an M.Sc. in Information Security from Royal Holloway, University of London, and has a background in software development, solution architecture, infrastructure management, and IT service management. He is also a member of the Institute of Electrical and Electronics Engineers (IEEE) and the British Computer Society (BCS). Imran has extensive experience in both the public and financial sectors, having worked on large-scale IT projects in the public sector before moving to the financial services industry. Since then, he has worked in various technical roles for different financial companies in Europe's financial capital, London.

Minko Gechev: "Developers should learn all major front end frameworks to go to the next level"

Richard Gall
04 Jun 2018
6 min read
This year's Skill Up survey produced some interesting results when it came to front end frameworks. Angular remains the most established tool, with 40% of web developers reporting that they use it regularly. React is actually a little further behind, with 25% using it regularly. Vue.js, meanwhile, is growing, but is used by just 20% of respondents. However, opinion was a little different when we asked which front end framework should win the battle of the three big front end tools. Respondents were split on Angular and React, with both JavaScript tools winning 34% of the vote. Vue wasn't far behind, at just over 30%. With the web development world apparently split over which framework is going to define the future of the field, how are we to pick them apart? Or do we even really need to worry?

Read the report in full. Sign up to our weekly newsletter and download the PDF for free.

The fact that we have three great front end tools jostling for developer attention is surely a good thing, right? To help us make sense of these trends, we caught up with Angular expert Minko Gechev to find out what he makes of web development in 2018, and what front end developers should be learning. Minko Gechev is the author of Switching to Angular. You can find the latest edition here on the Packt Store.

Which front end framework should you learn: Angular, React, or Vue?

Respondents to the Skill Up survey were evenly split between Angular, React, and Vue in the 'battle of the frameworks'. Which do you think developers should learn, and why?

In all of them, there are unique and interesting ideas which are worth exploring. I truly believe that learning all the major frameworks can help developers go to the next level! This doesn't necessarily mean being proficient in all of them. Having a high-level understanding of how the frameworks work and how to use them is completely enough, and will allow you to adapt according to a project's requirements. This is similar to learning programming languages from different paradigms: it helps you discover how problems are being solved in different ways.

For the past couple of years, the redux pattern has been the de facto standard for state management in modern front-end development. The good thing about redux is that it's view agnostic, so you can use it with any framework: Vue, React, Angular, etc. Angular has its own redux alternative called ngrx, which empowers a declarative approach with RxJS, but in general it follows the same underlying pattern. My recommendation would be to understand how to manage the state of our applications, because that's probably the most complex problem that we're solving in our day-to-day development process. Once we have a solid understanding of this, we can easily switch between different frameworks depending on the problems we're solving, what the rest of the team is using, and the project's requirements.
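Since the redux pattern comes up here as the framework-agnostic piece worth mastering, a minimal sketch may help. This is illustrative TypeScript, not the redux library's actual API; it shows only the core idea that state changes exclusively through a pure reducer:

```typescript
// Minimal sketch of the redux pattern: state changes only via pure reducers.
type Action = { type: "increment" } | { type: "add"; amount: number };

interface State {
  count: number;
}

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "add":
      return { count: state.count + action.amount };
  }
}

// A toy store: dispatch feeds actions through the reducer; views would subscribe.
let state: State = { count: 0 };
function dispatch(action: Action): void {
  state = reducer(state, action);
  console.log("new state:", state);
}

dispatch({ type: "increment" });      // { count: 1 }
dispatch({ type: "add", amount: 4 }); // { count: 5 }
```

The same shape works with React, Vue, or Angular on top, which is exactly why Minko calls the pattern view agnostic.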
A very interesting characteristic of learning Angular is that once we get comfortable with the framework, we'd also be familiar with TypeScript, RxJS, and techniques such as dependency injection. This may look like an initial overhead, but it's a great long-term investment which pays off really well in large projects.

How important is TypeScript to front end development?

How important is TypeScript to modern web development? Why?

Over the past couple of years I have seen a strong increase in the excitement around the language, not only in the Angular world but also in React and Vue. I'm personally using TypeScript for a few projects: a platform that we built with React, and an educational application written in Angular. I see a lot of value in using TypeScript. Recently I haven't started any project with JavaScript; for everything new I'm using TypeScript, and I'm trying to migrate as many of my existing projects as possible. There are a few reasons for this.

TypeScript provides a great development experience! Especially combined with VS Code, you can instantly notice when you've misspelled a property or method, or when you're trying to access a property of a nullable value. It gives you a sense of security that your program is correct to a given extent. Of course, TypeScript cannot save us from logical mistakes, but if we use its type system wisely, we can get great benefits.

You might be curious: what benefits? Well, TypeScript can help us reduce the number of bugs in our programs. In the study "To Type or Not to Type: Quantifying Detectable Bugs in JavaScript", the authors showed that the average JavaScript program could see a 15% reduction in bugs by using TypeScript's type system. The study used TypeScript version 2.0; with the latest features of the language, the number of detectable bugs is growing dramatically. webpack has also recently adopted TypeScript, because it helps discover existing issues in the codebase.

Web developers and JavaScript fatigue

Do you think we're past web developers experiencing 'JavaScript fatigue'?

JavaScript is very dynamic and it moves very quickly. There are a lot of potential issues which could be caused by a variety of reasons. With semantic versioning and powerful type systems (such as the type system of TypeScript), we're walking in the right direction, but we definitely have a long way to go until the ecosystem matures.

Web development over the next 12 months: WebAssembly and machine learning

What do you think will be the most important thing for developers to learn in the next 12 months?

There are a lot of exciting things happening nowadays! Web browsers are getting more and more powerful, exposing hundreds of APIs and opportunities. WebAssembly is moving very quickly, and I believe that together with Rust it has a lot of potential in the future. On the other hand, Google recently announced TensorFlow.js, a library which allows us to use machine learning (ML) in the browser. In the coming years, ML is going to take a larger portion of our development process (directly or indirectly) for:

Implementing features in our applications
Improving the development process

I'm specifically interested in the second point: improving our development process by using ML. Together with Addy Osmani, Kyle Mathews, and Katie Hempenius, we've been working on a toolkit called Guess.js. It aims to provide predictive bundling and pre-fetching based on ML techniques, in order to let us develop faster Angular/React/Vue/etc. applications. I'm really excited about what's coming up in the near future!

So are we! Thanks for taking the time to speak to us, Minko!

Listen: We discuss what it means to be a hacker with Adrian Pruteanu [Podcast]

Richard Gall
26 Apr 2019
2 min read
With numerous high-profile security breaches in recent years, cybersecurity feels like a particularly urgent issue. But while the media, and indeed the wider world, loves stories of modern vulnerabilities and mischievous hackers, there's often very little attention paid to what causes insecurity and what can practically be done to solve such problems.

To get a better understanding of cybersecurity in 2019, we spoke to Adrian Pruteanu, consultant and self-identifying hacker. He told us about what he actually does as a security consultant, what it's like working with in-house engineering teams, and how red team/blue team projects work in practice. Adrian is the author of Becoming the Hacker, a book that details everything you need to know to properly test your software using the latest pentesting techniques.

What does it really mean to be a hacker?

In this podcast episode, we covered a diverse range of topics, all of which help to uncover the reality of working as a pentester:

What it means to be a hacker, and how it's misrepresented in the media
The biggest cybersecurity challenges in 2019
How a cybersecurity consultant actually works
The most important skills needed to work in cybersecurity
The difficulties people pose when it comes to security

Listen here: https://soundcloud.com/packt-podcasts/a-hacker-is-somebody-driven-by-curiosity-adrian-pruteanu-on-cybersecurity-and-pentesting-tactics

How Gremlin is making chaos engineering accessible [Interview]

Richard Gall
14 Jun 2018
10 min read
Despite considerable hype, chaos engineering doesn't appear to have fully captured the imagination of the wider software engineering world yet. According to this year's Skill Up survey, only 13% of developers said they were excited about it. But that doesn't mean we should disregard it; far from it. Like many of the best trends, it might blow up when we least expect it, and find its way in front of your CTO in just a few months. As site reliability engineering grows as a discipline, and as businesses start to put a value on downtime, chaos engineering is likely to become a big part of the reliability and resilience toolkit.

Gremlin, chaos engineering, and the end of the age of downtime

"People are expected to always be up," says Matt Fornaciari, co-founder and CTO of Gremlin, a product that offers "failure as a service" to businesses. I spoke to Fornaciari last month to get a deeper insight into Gremlin and the team and ideas behind it. He believes the world has changed in recent years, and that the days of service windows, when sites would simply be taken down for an hour or two for an update or change, are over: "that's unacceptable to people these days."

Fornaciari isn't an unbiased observer, of course. The success of Gremlin depends on chaos engineering's adoption and acceptance. However, he's not going out on a limb; there's clear VC interest in Gremlin. At the end of 2017 the company received its first round of funding, more than 7 million USD. It's a cliche, but money does talk, and in this instance it seems to be saying that this approach might change the way we think about building our software.

Arguably, chaos engineering, and by extension Gremlin, is a response to other trends in software. "I've seen a lot of signals that this is the way the world's going," Fornaciari says. He's referring here to broader trends like cloud and microservices. He explains that because microservices is all about modularity, and about breaking aspects of your software infrastructure into smaller pieces, "you end up with nodes in this network" which "adds network complexity." Consequently, this additional complexity means there is more that can go wrong; the system becomes less reliable.

Gremlin's bid to democratize chaos engineering

It's important to note here that chaos engineering has been around for some time; it's not a radically new methodology. But it has largely been locked away in some of the world's biggest tech companies, like Netflix and Amazon. Many of Gremlin's leaders actually worked at those companies; Fornaciari has worked at Salesforce and Amazon, for example. "The main goal was to democratize chaos engineering... we've [the Gremlin team] done it at the bigger companies and we're like, you know what, everyone can benefit from this."

That is the essential point about chaos engineering: if it's going to catch on in the mainstream tech world, it needs to be more accessible to different businesses. Fornaciari explains that many of Gremlin's customers are larger organizations, companies for whom uptime is of utmost importance, where a site outage that lasts just an hour could cost thousands of dollars. That said, from a cultural perspective, many organizations find it difficult to adopt this sort of mindset. "Proving the value of something that doesn't happen," Fornaciari says, is one of the biggest challenges for Gremlin, particularly when selling their tool.
Pager pain: how Gremlin sells chaos engineering to customers

This is how Gremlin does it: "We have three qualifying questions: do you measure your downtime? Do you have somebody who's responsible for downtime? And do you actually have a dollar amount tied to it?" Presumably, for many organizations at least one answer to these questions is "no". That's why customer support is so important for Gremlin. "Customer success and developer advocacy are two of our biggest initiatives... I've told people as we're recruiting them that half of our goal as a company is to educate people."

Gremlin's challenges as a product and as a business reflect the wider difficulties of managing upwards. The tension between those 'on the ground' and those at a more senior, managerial level is one that Gremlin is acutely aware of. This is where a lot of push back comes from, Fornaciari explains:

"What we've seen so far is just push back from top down - like, why do we need this? We use the term pager pain to define the engineer on call - the closer you are to the ground, the closer you are to the on-call rotation, and the more you feel those pains and the more you believe in this. But as you rise up a couple of levels you maybe don't feel that as much... if you don't have that measure on uptime, unless someone is on the hook for that at a higher level, there's oftentimes a 'why do we need this, why are we going to spend money on breaking things?'"

Pager pain is a nice concept. It captures the tension between different layers of management, and highlights the conflict between 'what do we need?' and 'what can we do?'

Read next: Blockchain can solve tech's trust issues

Safety, simplicity and security

To successfully sell Gremlin, the way the product is designed is everything. For that reason, the Gremlin team have three tenets built into their product: safety, security, and simplicity. When you've got a "potentially dangerous tool," as Fornaciari himself describes it, making sure things are safe and secure is absolutely essential.

Arguably, the fact that chaos engineering is so hard to do well might be something Gremlin can use to its advantage. "One thing we hear when we talk to companies about it is 'well, we'll go build this ourselves', and the fact is it's a really hard thing to do, and a hard thing to do well." Gremlin is walking a bit of a tightrope. On the one hand chaos engineering is for everyone, but on the other it's difficult and dangerous; it should be accessible, but not too accessible. "One of the reasons we don't have a free offering is because we are a little worried about protecting our customers, not doing any harm to people... I mean, this is essentially giving somebody a potentially dangerous tool. If they're not given the proper education then that could be a problem, right?"

Gremlin isn't the only chaos engineering product out there. As with any trend, there are plenty of software platforms and tools emerging for technologically forward-thinking businesses. Fornaciari doesn't see these as a threat; he's confident, bullish even, about Gremlin's place in the market. "There are a lot of tools out there that people can go and use, but they really lack the safety and simplicity." Alongside its philosophy of safety, security and simplicity, a big selling point, according to Fornaciari, is the experience and expertise built into Gremlin's DNA. "We've got fifteen years of combined expertise in this space," he says.
"Being the experts on it, and having built it 3 or 4 times already in different big companies, sort of gave us this leg up to go out there in the world." But while Fornaciari is eager to assert Gremlin's knowledge, there's no trace of elitism; sharing knowledge is a core part of the product offering. "We actually built out customer success tooling so if particular attacks fail for them, we can actually proactively reach out and be like, 'hey, we saw you were trying to do this, maybe you meant to do this'," Fornaciari explains.

Controlled chaos: chaos engineering and the scientific method

Control is central to Gremlin's philosophy; it's a combination of the team's commitment to safety, security, and simplicity. In fact, it's this element of control that distinguishes chaos engineering today from what went before. Central to Gremlin's mission to make chaos engineering accessible is redefining how it's done. "If you're familiar with the Netflix Chaos Monkey mentality of randomly terminating services, well, that's a good start, but safety is really lacking. We talk more about this controlled chaos... this idea that you start fairly small with this small blast radius, and then as you become more confident you grow it out and grow it out, as opposed to just like, 'cool, let's just chuck a grenade in here and see what happens.'"

Fornaciari goes on to describe this 'controlled chaos' in a surprising way: "It's much more like the scientific method, actually. Applying that method to your infrastructure and your reliability in general." This approach is essential if you're going to do chaos engineering well.

How to do chaos engineering effectively

When I ask Fornaciari how engineering teams and businesses can do chaos engineering well, he emphasizes the importance of starting with a hypothesis: "You need to have a hypothesis that you're trying to prove. Throwing random chaos at something is fine - it'll surface some of the unknown unknowns for you. But really having a hypothesis that you're trying to prove is the best way to get value out of this [chaos engineering]."

If you're going to take a scientific approach to testing your infrastructure using 'chaos experiments', managing scale is also incredibly important. Don't run before you can walk is the message. "Keep it very small initially, then you start to grow the blast radius. You definitely want to make sure that you're starting off with the smallest modicum that you can." Given the potential dangers of throwing metaphorical gremlins into your system, starting where you're comfortable makes a lot of sense. "Start in staging, start where you're comfortable, build your confidence. Make sure your system behaves well in front of non-customer-facing traffic before you go out to the world." That said, Gremlin has had "some pretty bold customers" who go straight ahead and run chaos experiments in production. "That was cool. It's a little scary, but they were confident, and they've been using Gremlin as part of their system ever since."
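The "small blast radius, clear hypothesis" idea is simple enough to sketch. The TypeScript below is purely illustrative and is not Gremlin's API: a wrapper that delays a small, configurable fraction of calls, so a latency hypothesis can be tested in staging before the experiment is widened.

```typescript
// Toy fault injection with a controlled blast radius -- illustrative only,
// not Gremlin's API. Hypothesis: the service meets its latency target even
// when 5% of downstream calls are delayed by two seconds.
function withLatencyChaos<T>(
  call: () => Promise<T>,
  blastRadius = 0.05, // fraction of calls affected; start small, then grow
  delayMs = 2000,
): () => Promise<T> {
  return async () => {
    if (Math.random() < blastRadius) {
      await new Promise((resolve) => setTimeout(resolve, delayMs)); // inject latency
    }
    return call();
  };
}

// Usage sketch: wrap a downstream call in staging and watch your dashboards.
const flakyFetch = withLatencyChaos(() => fetch("https://internal.example/api"));
```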
Chaos engineering requires confidence and control

Ultimately, if chaos engineering is going to take off, as Fornaciari believes it will, engineers will need to be incredibly confident. That's true on a number of levels. You need confidence that you'll be able to handle a range of experiments and deploy them wisely. But you'll also need confidence that you can manage the expectations of those in senior management.

It's not hard to see the value of chaos engineering. As Fornaciari says, "if you prevent one outage one time, you've saved that money to pay for the tool to make sure it doesn't happen again." But it might be hard to find time for it, and hard to get buy-in and investment in the tools you need to do it. Gremlin is certainly going to play an important part in helping engineers do that. But one of its biggest challenges, and perhaps one of its most noble missions too, is transforming a culture where people don't really appreciate 'pager pain'. If Fornaciari and Gremlin can help solve that, good luck to them.

You can follow Matt Fornaciari on Twitter: @callmeforni

Cybersecurity researcher "Elliot Alderson" talks Trump and Facebook, Google and Huawei, and teaching kids online privacy [Podcast]

Richard Gall
08 Aug 2019
3 min read
For anyone who has watched Mr. Robot, the name Elliot Alderson will sound familiar. However, we're not talking about Rami Malek's hacker alter ego; the name has been adopted as an alias by a real-life white-hat hacker who has been digging into the dark corners of the wild and often insecure web. Elliot's real name is Baptiste Robert (whisper it...). He was kind enough to let us peek beneath the pseudonym, and spoke to us about his work as a cybersecurity researcher and what he sees as the biggest challenges in software security today.

Listen: https://soundcloud.com/packt-podcasts/cybersecurity-researcher-elliot-alderson-on-fighting-the-good-fight-online

"Elliot Alderson" on cybersecurity, politics, and regulation

In the episode we discuss a huge range of topics, including:

Security and global politics: is it evolving the type of politics we have? Is it eroding trust in established institutions?
Google's decision to remove its apps from Huawei devices
The role of states and the role of corporations: who is accountable? Who should we trust?
Regulation
Technological solutions

What Elliot Alderson has to say on the podcast episode...

On Donald Trump's use of Facebook in the 2016 presidential election: "We saw that social networks have an impact on elections. Donald Trump was able to win the election because of Facebook - because he was very aggressive on Facebook and able to target a lot of people..."

On foreign interference in national elections: "We saw, also, that these tools... have been used by countries... in order to manipulate the elections of another country. So as a technician, as a security researcher, as an infosec professional, you need to ask yourself what is happening - can we do something against that? Can we create some tool? Can we fight this phenomenon?"

How technology professionals and governing institutions should work together: "We should be together. This is the responsibility of government and countries to find vulnerabilities and to ensure the security of products used by its citizens - but it's also the responsibility of infosec professionals, and we need to work closely with governments to be sure that nobody abuses vulnerabilities out there..."

On teaching the younger generation about privacy and protecting your data online: "I think government and countries should teach young people the value of personal data... personally, as a dad, this is something I'm trying to teach my kids - and say, okay, this website is asking you your personal address, your personal number, but do they need it? ...In a lot of cases the answer is quite obvious: no, they don't need it."

On Google banning Huawei: "My issue with the Huawei story and the Huawei ban is that as a user, as a citizen, we are only seeing the consequences. Okay, Google banned Huawei - Huawei is not able to use Google services. But we don't have the technical information behind that."

On the importance of engineering ethics: "If your boss is coming to you and saying 'I would like to have an application which is tracking people during their day-to-day work', what is your decision? As developers, we need to say 'no: this is not okay. I will not do this kind of thing'."

Read next: Doteveryone report claims the absence of ethical frameworks and support mechanisms could lead to a 'brain drain' in the U.K. tech industry

Follow Elliot Alderson on Twitter: @fs0c131y

Selenium and data-driven testing: An interview with Carl Cocchiaro

Richard Gall
17 Apr 2018
3 min read
Data-driven testing has become a lot easier thanks to tools like Selenium. That's good news for everyone in software development: it means you can build better software that works for users much more quickly. While the tension between performance and the need to deliver will always remain, it's thanks to the efforts of developers to improve testing tools that we are where we are today.

We spoke to Carl Cocchiaro about data-driven testing and much more. Carl is the author of Selenium Framework Design in Data-Driven Testing. He also talked to us about his book and why it's a useful resource for web developers interested in innovations in software testing today.

What is data-driven testing?

Packt: Tell us a little bit about data-driven testing.

Carl Cocchiaro: Data-driven testing has been made very easy with technologies like Selenium and TestNG. Users can annotate test methods and add attributes like DataProviders and groupings to them, allowing users to iterate through the methods with varying data sets. (A rough sketch of the idea appears at the end of this article.)

The key features

Packt: What are the 3 key features of Selenium that make it worth people's attention?

CC: Platform independence, its support for multiple programming languages, and its grid architecture, which is really useful for remote testing.

Packt: Could someone new to Java start using Selenium? Or are there other frameworks?

CC: Selenium WebDriver is an API that can be called in Java to test the elements on a browser or mobile page. It is the gold standard in test automation; everyone should start out learning it, and it's pretty fun to use.

What are the main challenges of moving to Selenium?

Packt: What are the main challenges someone might face when moving to the framework?

CC: Like anything else, the language syntax has to be learned in order to be able to test the applications. Along with that, the TestNG framework coupled with Selenium has lots of features for data-driven testing, and there's a learning curve on both.

How to learn Selenium

Packt: How is your book a stepping stone for a new Selenium developer?

CC: The book details how to design and develop a Selenium framework from scratch, and how to build in data-driven testing using TestNG and a DataProvider class. It's complex from the start, but has all the essentials to create a great testing framework. Readers should get the basics down first before moving towards other types of testing, like performance, REST API, and mobile.

Packt: What makes this book a must-have for anyone interested in or working with the tool?

CC: Many Selenium guides are geared towards getting users up and running, but this is an advanced guide that teaches all the tricks and techniques I've learned over 30 years.

Packt: Can you give people 3 reasons why they should read your book?

CC: It's a must-read if designing and developing new frameworks, it circumvents all the mistakes users make in building frameworks, and you will be a Selenium rockstar at your company after reading it!

Learn more about software testing:

Unit Testing and End-To-End Testing
Testing RESTful Web Services with Postman
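As promised above, here is a rough sketch of the data-driven idea. Carl's book uses Java with TestNG's DataProvider; the version below transposes the same pattern to TypeScript with the selenium-webdriver package, where a plain array stands in for the data provider. The site, selector, and search terms are illustrative, and it assumes Chrome plus a matching chromedriver are installed.

```typescript
import { Builder, By, WebDriver } from "selenium-webdriver";

// A plain array standing in for TestNG's DataProvider: each entry
// drives one iteration of the same test method.
const searchTerms = ["selenium", "webdriver", "testng"];

async function searchTest(driver: WebDriver, term: string): Promise<void> {
  await driver.get("https://duckduckgo.com/"); // illustrative target site
  await driver.findElement(By.name("q")).sendKeys(term + "\n");
  console.log(`"${term}" ->`, await driver.getTitle()); // assert on this in a real test
}

(async () => {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    for (const term of searchTerms) {
      await searchTest(driver, term); // same test, varying data sets
    }
  } finally {
    await driver.quit(); // always release the browser, even on failure
  }
})();
```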

Wolf Halton on what’s changed in tech and where we are headed

Guest Contributor
20 Jan 2019
4 min read
The tech industry is changing at a massive rate, especially as storage has moved to the cloud. This shift has also raised questions about security, data management, changes to the work structure within organizations, and much more. Wolf Halton, an expert in Kali Linux, tells us about the security element of the cloud. He also touches upon the skills and knowledge that should be built into your software development cycle in order to adjust to dynamic tech changes, both present and future. Following this, he juxtaposes the current software development landscape with the ideal one.

Wolf, along with fellow Kali Linux expert Bo Weaver, was also interviewed on why Kali Linux is the premier platform for testing and maintaining Windows security. They talked about the advantages and disadvantages of using Kali Linux for pentesting, and gave their stance on the role of pentesting in cybersecurity, in the interview titled "Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity".

Security on the cloud

The biggest change in the IT environment is how business leaders and others are implementing cloud-services agreements. It used to be a question of IF we would put some of our data or processes in the cloud; now it is strictly a question of WHEN. The cloud is, first and foremost, a (failed) marketing term designed to obfuscate the actual relationship between the physical and logical networks. The security protections cloud companies give you are very good from the cabling to the hypervisor, but above that you are on your own in the realm of security. You remain responsible for safeguarding your own data. The main difference between cloud architectures and on-premises architectures is that cloud systems aren't as front-loaded with hardware costs and software licensing costs.

Why filling in the 'skills gap' is a must

The schools that teach the skills are often five or ten years behind in the technology they teach, and they tend to teach how to run tools rather than how to develop (and discard) approaches quickly. Most businesses that can afford a security department want to hire senior-level security staff only. This makes a lot of sense, as seniors are more likely to avoid beginner mistakes. But if you only hire seniors, it forces apt junior security analysts to go through a lot of exploitative off-track employment before they are able to get into the field.

Software development is not just about learning to code

Development is difficult for a host of reasons. First off, only about 5% of people want to learn to code, have access to the information, and can think abstractly enough to be able to code. This was my experience in six years of teaching coding to college students majoring in computer networking (IT) and electrical engineering. It is about intelligence, yes, but even in a group of equally intelligent people taught to code in an easy language like Python, only one in 20 will go past a first-year programming course.

Security is an afterthought for IoT developers

The Internet of Things (IoT) has created a huge security problem, which the manufacturers do not seem to be addressing responsibly. IoT devices have a design flaw similar to the one that has informed all versions of Windows to this day.
Windows was designed to be a personal plaything for technology enthusiasts who couldn't get time on the mainframes available at the time; it was designed as a stand-alone, non-networked device. NT 3.0 brought networking and "enterprise server" Windows, but the monolithic way Windows is architected, along with the direct-to-kernel-space attachment of third-party services, continues to give Windows more than its share of high and critical vulnerabilities. IoT devices are cheap as computers go, and since security is an afterthought for most developers, IoT developers create marvelously useful devices with poor or nonexistent user authentication. Expect it to get worse before it gets better (if it ever gets better).

Author Bio

Wolf Halton is an authority on computer and internet security, a best-selling author on computer security, and the CEO of Atlanta Cloud Technology. He specializes in business continuity, security engineering, open source consulting, marketing automation, virtualization and data center restructuring, network architecture, and Linux administration.

Pentest tool in focus: Metasploit
Kali Linux 2018.2 released
How artificial intelligence can improve pentesting

Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity [Interview]

Guest Contributor
18 Jan 2019
4 min read
This is Part 2 of our interview with Kali Linux experts Wolf Halton and Bo Weaver on using Kali Linux for pentesting. In this part, we talk about the role of pentesting in cybersecurity. Previously, the authors talked about why Kali Linux is the premier platform for testing and maintaining Windows security, covering the advantages and disadvantages of using Kali Linux for pentesting. They also talked about their love for the Kali platform. Wolf says, "Kali is a stable platform, based upon a major distribution with which I am very familiar. There are over 400 security tools in the Kali repos, and it can also draw directly from the Debian Testing repos for even more tools." Here are a few more questions we asked them about pentesting in cybersecurity in general.

Can you tell us about the role of pentesting in cybersecurity? According to you, how has pentesting improved over the years?

Bo Weaver: For one thing, pentesting has become an accepted and required practice in network security. I remember the day when the attitude was, "It can't happen here, so why should you break into my network? Nobody else is going to." Network security in general wasn't even thought about by most companies, and spending money on it was seen as a waste. The availability of tools has grown in leaps and bounds, as has the availability of documentation on vulnerabilities and exploits, and the industry's awareness of the importance of network security.

Wolf Halton: The tools have gotten much more powerful and easier to use. A pentester will still be more effective if they can craft their own exploits, but they can now craft them in an environment of shared libraries such as Metasploit, and there are stable pentesting platforms like Kali Linux Rolling (2018) that reduce the learning curve to becoming an effective pentester. Pentesting is rising as a profession, along with many other computer-security roles. There are compliance requirements to do penetration tests at least annually, or when a network is changed appreciably.

What aspects of pentesting do you feel are tricky to get past? What are the main challenges that anyone would face?

Bo Weaver: Staying out of jail. Laws can be tricky. You need to know and fully understand all laws pertaining to network intrusion, both for the state you are working in and at the federal level. In pentesting, you are walking right up to the line of right and wrong and hanging your toes over that line a little bit. You can hang your toes over the line, but DON'T CROSS IT! Not only will you go to jail, but you will never work in the security field again, unless it is in some dark corner of the NSA.

Never work without a WRITTEN waiver that fully contains the "Rules of Engagement" and is signed by the owner or a C-level person of the company being tested.

Don't decide to test your bank's website, even if your intent is good. If you do find a flaw and report it, you will not get a pat on the back but will most likely be charged with hacking. Banks especially get real upset when people poke at their networks. Yes, some companies offer bug bounty programs. These companies have Rules of Engagement posted on their site, along with a waiver to take part in the program. Print this and follow the rules laid out.

Wolf Halton: Staying on the right side of the law. Know the laws that govern your profession, and always know your customer.
Have a hard copy of an agreement that gives you permission to test a network. Attacking a network without written permission is a felony, and might reduce your available career paths.

Author Bio

Wolf Halton is an authority on computer and internet security, a best-selling author on computer security, and the CEO of Atlanta Cloud Technology. He specializes in business continuity, security engineering, open source consulting, marketing automation, virtualization and data center restructuring, network architecture, and Linux administration.

Bo Weaver is an old-school ponytailed geek. His first involvement with networks was in 1972, while in the US Navy working on an R&D project called ARPANET. Bo has been working with and using Linux daily since the 1990s and is a promoter of open source. (Yes, Bo runs on Linux.) He now works as the senior penetration tester and security researcher for CompliancePoint, an Atlanta-based security consulting company.

Pentest tool in focus: Metasploit
Kali Linux 2018.2 released
How artificial intelligence can improve pentesting

Listen: researcher Rowel Atienza discusses artificial intelligence, deep learning, and why we don't need to fear a robot-ruled future [Podcast]

Richard Gall
08 Apr 2019
2 min read
Artificial intelligence threats are regularly talked up by the media, largely because the area is widely misunderstood. The robot revolution and dangerous algorithms are, unfortunately, much sexier than math and statistics. Artificial intelligence isn't really that scary. And while it does pose many challenges for society, it's essential to remember that these are practical challenges that don't exist in some abstract realm. They are engineering and ethical problems that we can all help solve.

In this edition of the Packt podcast, we spoke to Rowel Atienza about the reality of artificial intelligence. In particular, we wanted to understand the practical realities behind the buzz. As an Associate Professor at the University of the Philippines researching numerous aspects of artificial intelligence, and the author of Advanced Deep Learning with Keras, he's someone with experience and insight into what really matters across the field.

Getting past the artificial intelligence hype with Rowel Atienza

In the episode we discussed:

The distinction between AI, machine learning, and deep learning
Why artificial intelligence is so hot right now
The key machine learning frameworks: TensorFlow, PyTorch, and Keras
How they compare, and why Rowel loves Keras
The importance of ethics and transparency
Essential skills for someone starting or building a career in the field
How far we really are from AGI

Listen here: https://soundcloud.com/packt-podcasts/were-still-very-far-from-robots-taking-over-society-rowel-atienza-on-deep-learning-and-ai
Blockchain can solve tech's trust issues - Imran Bashir

Richard Gall
05 Jun 2018
4 min read
The hype around blockchain has now reached fever pitch. Now that the Bitcoin bubble has all but burst, it would seem that the tech world - and beyond - is starting to think more creatively about how blockchain can be applied. We've started to see blockchain being applied in a huge range of areas, and that's likely to grow over the next year or so. We certainly weren't surprised to see blockchain rated highly by many developers working in a variety of fields in this year's Skill Up survey. Around 70% of all respondents believe that blockchain is going to prove to be revolutionary.

Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

To help us make sense of the global enthusiasm and hype for blockchain, we spoke to blockchain expert Imran Bashir. Imran is the author of Mastering Blockchain, so we thought he could offer some useful insights into where blockchain is going next. He didn't disappoint.

Respondents to the Skill Up survey said that blockchain would be revolutionary. Do you agree? Why?

I agree. The fundamental issue that blockchain solves is that of trust. It enables two or more mutually distrusting parties to transact with each other without the need to establish trust or rely on a trusted third party. This phenomenon alone is enough to start a revolution. Generally, we perform transactions in a centralised and trusted environment, which is the norm and works reasonably well - but think about a system where you do not need trust or a central trusted third party to do business. This paradigm fundamentally changes the way we conduct business and results in significant improvements such as cost savings, security, and transparency.

Why should developers learn blockchain? Do you think blockchain technology is something the average developer should be learning? Why?

Any developer should learn blockchain technology, because in the next year or so there will be high demand for skilled blockchain developers and engineers. Even now there are many unfilled jobs; it is said that there are 14 openings for every blockchain developer. The future will be built on blockchain; every developer and technologist should strive to learn it.

What most excites you about blockchain technology?

It is the concept of decentralisation and its application in almost every industry, ranging from finance and government to medicine and law. We will see applications of this technology everywhere. It will change our lives, just the way the Internet did in the 1990s. Also, smart contracts constitute a significant part of blockchain technology, and they allow you to implement contracts that are automatically executable and enforceable. This ability allows you to drastically reduce the amount of time it takes for contract enforcement and eliminates the need for third parties and manual processes that can take a long time to come into action. Enforcement in the real world takes a long time; in the blockchain world, it is reduced to a few minutes, if not seconds, depending on the application and requirements.

What tools do you think are essential to master in order to take advantage of blockchain?

Currently, there are several options available. Blockchain platforms such as Ethereum and Hyperledger Fabric are the most commonly used for development. As such, developers should focus on at least one of these platforms.
It is best to start with the basic tools and features available in a blockchain, and once you have mastered the concepts, you can move on to using frameworks and APIs, which will ease the development and deployment of decentralised applications.

What do you think will be the most important thing for developers to learn in the next 12 months?

Learn blockchain technology and at least one related platform. Also explore how to implement business solutions using blockchain in ways that deliver its benefits, such as security, cost savings, and transparency.

Thanks for taking the time to talk to us, Imran! You can find Imran's book on the Packt store.
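Imran's point about trust ultimately rests on tamper evidence: every block commits to the hash of the block before it, so history cannot be quietly rewritten. Here is a toy sketch of that idea in Python - my own illustration, not material from Mastering Blockchain:

```python
# Toy hash-linked chain: each block's hash covers the previous block's hash,
# so altering any historical block breaks every link after it.
# Illustrative sketch only -- not from Mastering Blockchain.
import hashlib
import json

def block_hash(block):
    # Hash a canonical JSON encoding of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False  # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))              # True
chain[0]["data"] = "Alice pays Bob 500"
print(is_valid(chain))              # False -- tampering is detectable
```

Changing any historical block invalidates its own hash and breaks every link after it, which is why mutually distrusting parties can agree on the same history.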
Honeycomb CEO Charity Majors discusses observability and dealing with "the coming armageddon of complexity" [Interview]

Richard Gall
13 Mar 2019
16 min read
Transparency is underrated in the tech industry. But as software systems grow in complexity and their relationship with the real world becomes increasingly fraught, it nevertheless remains a value worth fighting for. To fight for it effectively, it's essential to remember that transparency is a technological issue, not just a communication one. Decisions about how software is built, and why it's built the way that it is, lie at the heart of what it means to work in software engineering. Indeed, the industry is in trouble if we can't see just how important those questions are in relation to everything from system reliability to our collective mental health.

Observability, transparency, and humility

One term has recently emerged as a potential solution to these challenges: observability (or o11y, as it's known in the community). This is a word that has been around for some time, but it's starting to find real purchase in the infrastructure engineering world. There are many reasons for this, but a good deal of credit needs to go to observability platform Honeycomb and its CEO Charity Majors.

[Image: Charity Majors]

Majors has been a passionate advocate for observability for years. You might even say Honeycomb evolved from that passion and her genuine belief that there is a better way for software engineers to work. With a career history spanning Parse and Facebook (which acquired Parse in 2013), Majors is well placed to understand, diagnose, and solve the challenges the software industry faces in managing and maintaining complex distributed systems designed to work at scale.

"It's way easier to build a complex system than it is to run one or to understand one," she told me when I spoke to her in January. "We're unleashing all these poorly understood complex systems on the world, and later having to scramble to make sense of it."

Majors is talking primarily about her work as a systems engineer, but it's clear (to me at least) that this is true in lots of ways across tech, from the reliability of mobile apps to the accuracy of algorithms. And ultimately, impenetrable complexity can be damaging. Unreliable systems, after all, cost money.

The first step to counteracting the challenges of distributed systems, Majors suggests, is an acceptance of a certain degree of impotence. We need humility. She talks of "a shift from an era when you could feel like your systems were up and working to one where you have to be comfortable with the fact that it never is." While this can be "uncomfortable and unsettling for people in the beginning," in reality it's a positive step. It moves us towards a world where we build better software with better processes. And, most importantly, it cultivates more respect for people on all sides - engineers and users.

Charity Majors' (personal) history of observability

Observability is central to Charity Majors' and Honeycomb's purpose. But it isn't a straightforward concept, and it's also one that has drawn considerable debate in recent months. Ironically, although the term is all about clarity, it has been mired in confusion, with the waters of its specific meaning more than a little muddied. "There are a lot of people in this space who are still invested in 'oh, observability is a generic synonym for telemetry,'" Majors complains.
However, she believes that "engineers are hungry for more technical terminology," because the feeling of having to deal with problems for which you are not equipped - quite literally - is not uncommon in today's industry. With all the debate around what observability is, and its importance to Honeycomb, Majors is keen to ensure its definition remains clear. "When Honeycomb started up… observability was around as a term, but it was just being used as a generic synonym for telemetry… when we started… the hardest thing was trying to think about how to talk about it... because we knew what we were doing was different," Majors explains.

Experimentation at Parse

The route to uncovering the very specific - but arguably more useful - definition of observability was through a period of sustained experimentation while at Parse. "Around the time we got acquired... I was coming to this horrifying realisation that we had built a system that was basically un-debuggable by some of the best engineers in the world."

The key challenge for Parse was dealing with the scale of mobile applications. Parse customers would tell Majors and her team that the service was down for them, underlining the inability of Parse's monitoring tools to pick up these tiny pockets of failure ("Behold my wall of dashboards! They're all green, everything is fine!" Majors would tell them).

Scuba: the "butt-ugly" tool that formed the foundations of Honeycomb

The monitoring tools Parse was using at the time weren't that helpful because they couldn't deal with high-cardinality dimensions. Put simply, if you wanted to look at things on a granular, user-by-user basis, you just couldn't do it. "I tried everything out there… the one thing that helped us get a handle on this problem was this butt-ugly tool inside Facebook that was aggressively hostile to users and seemed very limited in its functionality, but did one thing really well… it let you slice and dice in real time on dimensions of arbitrarily high cardinality."

Despite its shortcomings, this set it apart from other monitoring tools, which are "geared towards low cardinality dimensions," Majors explains.

[Image: More than just a quick fix (Credit: Charity Majors)]

So, when you're looking for "needles in a haystack," as Parse engineers often were, the level of cardinality is essential. "It was like night and day. It went from hours, days, or impossible, to seconds. Maybe a minute."

Observability: more than just a platform problem

This experience was significant for Majors and set the tone for Honeycomb. Her experience of working with Scuba became a frame for how she would approach all software problems. "It's not even just about, oh, the site is down, debug it; it's, like, how do I decide what to build?" It had, she says, "become core to how I experienced the world." Over the course of developing Honeycomb, it became clear to Majors that the problems the product was trying to address were actually deep: "a pure function of complexity."

"Modern infrastructure has become so ephemeral you may not even have servers, and all of our services are far flung and loosely coupled. Some of them are someone else's service," Majors says. "So I realise that everyone is running into this problem and they just don't have the language for it. All we have is the language of monitoring and metrics when...
this is inherently a distributed systems problem, and the reason we can't fix them is because we don't have distributed systems tools."

Towards a definition of observability

Looking over my notes, I realised that we didn't actually talk that much about the definition of observability. At first I was annoyed, but in reality this is probably a good thing. Observability, I realised, is only important insofar as it produces real-world effects on how people work. From the tools they use to the way they work together, observability, like other tech terms such as DevOps, only really has value to the extent that it is applied and used by engineers.

[Image: It's not always easy to tell exactly what you're looking at (Credit: Charity Majors)]

"Every single term is overloaded in the data space - every term has been used - and I was reading the dictionary definition of the word 'observability' and... it's from control systems and it's about how much can you understand and reason about the inner workings of these systems just by observing them from the outside. I was like, oh fuck, that's what we need to talk about!"

In reality, then, observability is a pretty simple concept: how much can you understand and reason about the inner workings of a system just by observing it from the outside.

Read next: How Gremlin is making chaos engineering accessible [Interview]

But things, as you might expect, get complicated when you try to actually apply the concept. It isn't easy. Indeed, that's one of the reasons Majors is so passionate about Honeycomb.

Putting observability into practice

Although Majors is a passionate advocate for Honeycomb, and arguably one of its most valuable salespeople, she warns against the tendency for tooling to be viewed as a silver-bullet solution to problems. "A lot of people have been sold this magic spell idea, which is that you don't have to think about instrumentation or explaining your code back to yourself," Majors says. Erroneously, some people will think they "can just buy this tool for millions of dollars that will do it for you… it's like write code, buy tool, get magic… and it doesn't actually work, it never has and it never will."

This means that while observability is undoubtedly a tooling issue, it's just as much a cultural issue too. With this in mind, you definitely shouldn't make the mistake of viewing Honeycomb as magic. "It asks more of you up front," Majors says. "There is no magic. At no point in the future are you going to get to just write code and lob it over the wall for ops to deal with. Those days are over, and anyone who is telling you anything else is selling you some very expensive magic beans. The systems of the future do require more of developers. They ask you to care a little bit more up front, in terms of instrumentation and operability, but over the lifetime of your code you reap that investment back hundreds or thousands of times over. We're asking you, and helping you, make the changes you need to deal with the coming Armageddon of complexity."

Observability is important, but it's a means to an end: the end goal is to empower software engineers to practice software ownership. They need to own the full lifecycle of their code.

How transparency can improve accountability

Because Honeycomb demands more 'up front' from its users, it requires engineering teams to be transparent (with one another) and fully aligned.
Think of it this way: if there's no transparency about what's happening and why, and little accountability for making sure things do or don't happen inside your software, Honeycomb is going to be pretty impotent. We can only really get to this world when everyone starts to care properly about their code, and more specifically, how their code runs in production. "Code isn't even interesting on its own… code is interesting when users interact with it," Majors says. "It has to be in production."

That's all well and good (if a little idealistic), but Majors recognises there's another problem we still need to contend with. "We have a very underdeveloped set of tools and best practices for software ownership in production… we've leaned on ops to… be just this, like, repository of intuition… so you can't put a software engineer on call immediately and have them be productive…"

Observability as a force for developer well-being

This is obviously a problem that Honeycomb alone isn't going to fix. And while it's a problem the Honeycomb marketing team would love to fix, it's not just about Honeycomb's profits. It's also about people's well-being.

[Image: The Honeycomb team (Credit: Charity Majors)]

"You should want to have ownership. Ownership is empowering. Ownership gives you the power to fix the thing you know you need to fix and the power to do a good job… People who find ownership is something to be avoided - that's a terrible sign of a toxic culture."

The impact of this 'toxic culture' manifests itself in a number of ways. The first is the all-too-common issue of developer burnout. This happens because a working environment that doesn't actively promote code ownership and accountability leads to people having to work on code they don't understand. They might, for example, be working in production environments they haven't been trained to adequately work with. "You can't just ship your code and go home for the night and let ops deal with it," Majors asserts. "If you ship a change and it does something weird, the best person to find that problem is you. You understand your intent, you have all the context loaded in your head. It might take you 10 minutes to find a problem that would take anyone else hours and hours."

Superhero hackers

The second issue is one that many developers will recognise: the concept of the 'superhero hacker'.

Read next: Don't call us ninjas or rockstars, say developers

"I remember the days of like… something isn't working, and we'd sit around just trying random things or guessing... it turns out that is incredibly inefficient. It leads to all these cultural distortions, like the superhero hacker who does the best guessing. When you have good tooling, you don't have to guess. You just look and see."

Majors continues on this idea: "the source of truth about your systems can't live in one guy's head. It has to live in a tool where everyone has access to the same information about the system, one single source of truth... Otherwise you're gonna have that one guy who can't go on vacation ever."

While a cynic might say, well, she would say that - it's a product pitch for Honeycomb - they'd ultimately be missing the point. This is undoubtedly a serious issue that's having a severe impact on our working lives. It leads directly to mental health problems and can even facilitate discrimination based on gender, race, age, and sexuality. At first glance, that might seem like a stretch.
But when you're not empowered - by the right tools and the right support - you quite literally have less power. That makes it much easier for you to be marginalized or discriminated against.

Complexity stops us from challenging the status quo

The problem really lies with complexity. Complexity has a habit of entrenching problems. It stops us from challenging the status quo by virtue of the fact that we simply don't know how to. This is something Majors takes aim at. In particular, she criticises "the incorrect application of complexity to the business problem it solves." She goes on to say that "when this happens, humans end up plugging the dikes with their thumbs in a continuous state of emergency. And that is terrible for us as humans."

How Honeycomb practices what it preaches

Majors' passion for what she believes is evidenced in Honeycomb's ethos and values. It's an organization that is quite deliberately doing things differently, from both a technical and a cultural perspective.

[Image: Inside the Honeycomb HQ (Credit: Charity Majors)]

Majors tells me that when Honeycomb started, the intention was to build a team that didn't rely upon superstar engineers: "We made the very specific intention to not build a team of just super-senior expert engineers - we could have, they wanted to come work with us, but we wanted to hire some kids out of bootcamp, we wanted to hire a very well-rounded team of lots of juniors and intermediates... This was a decision that I made for moral reasons, but I honestly didn't know if I believed that it would be better, full disclosure - I honestly didn't have full confidence that it would become the kind of high-powered team that I felt so proud to work on earlier in my career. And yet... I am humbled to say this has been the most consistently high-performing engineering team that I have ever had the honor to work with. Because we empower them to collaborate and own the full lifecycle of their own code."

Breaking open the black boxes that sustain internal power structures

This kind of workplace, where "the team is the unit you care about," is one that creates a positive and empowering environment, which is a vital foundation for a product like Honeycomb. In fact, the relationship between the product and the way the team behind it works is almost mimetic, as if one reflects the other. Majors says that "we're baking" Honeycomb's organizational culture "into the product in interesting ways."

She says that what's important isn't just the question of "how do we teach people to use Honeycomb, but how do we teach people to feel safe and understand their giant sprawling distributed systems. How do we help them feel oriented? How do we even help them feel a sense of safety and security?"

Honeycomb is, according to Majors, like an "outsourced brain." It's a product that means you no longer need to worry about information about your software being locked in a single person's brain, because that information should be available and accessible inside the product. This gives individuals safety and security, because it means that typical power structures, often based on experience or being "the guy who's been there the longest," become weaker. Black boxes might be mysterious, but they're also pretty powerful.
With a product like Honeycomb, or, indeed, the principles of observability more broadly, that mystery begins to lift, and the black box becomes ineffective.

Honeycomb: building a better way of developing software and developing together

In this context, Liz Fong-Jones' move to Honeycomb seems fitting. Fong-Jones (who you can find on Twitter @lizthegrey) was a Staff SRE at Google and a high-profile critic of the company over product ethics and discrimination. She announced her departure at the beginning of 2019 (in fact, Fong-Jones started at Honeycomb in the last week of February). By joining Honeycomb, she left an environment where power was being routinely exploited for one where the redistribution of power is at the very center of the product vision.

Honeycomb is clearly a product and a company that offers solutions to problems far more extensive and important than it may have initially imagined. Perhaps we're now living in a world where the problems it's trying to tackle are more profound than they first appear. You certainly wouldn't want to bet against its success with Charity Majors at the helm.

Follow Charity Majors on Twitter: @mipsytipsy

Learn more about Honeycomb and observability at honeycomb.io. You can try Honeycomb for yourself with a free trial.
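As a concrete footnote to Majors' point about caring "up front" about instrumentation: one pattern observability practitioners often describe is emitting a single wide, structured event per unit of work, carrying arbitrarily high-cardinality fields (request IDs, user IDs) that can later be sliced and diced. The sketch below is a hypothetical illustration of that pattern in plain Python - it is not Honeycomb's client API:

```python
# Minimal sketch of wide-event instrumentation: one structured event per
# request, carrying high-cardinality fields you can later slice and dice on.
# Hypothetical illustration only -- this is not Honeycomb's actual client API.
import json
import time
import uuid

def handle_request(user_id, endpoint):
    event = {
        "request_id": str(uuid.uuid4()),  # unique per request: maximal cardinality
        "user_id": user_id,               # lets you find that *one* failing user
        "endpoint": endpoint,
        "timestamp": time.time(),
    }
    start = time.monotonic()
    try:
        # ... real work would happen here ...
        event["status"] = 200
    except Exception as exc:
        event["status"] = 500
        event["error"] = repr(exc)
        raise
    finally:
        event["duration_ms"] = (time.monotonic() - start) * 1000.0
        print(json.dumps(event))  # ship to your event store instead of stdout

handle_request(user_id="user-42", endpoint="/checkout")
```

Because every event carries the full context of its request, "needles in a haystack" questions - which user, which endpoint, which build - become simple queries rather than guesswork.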
Understanding the Fundamentals of Analytics Teams with John K. Thompson

Expert Network
06 Apr 2021
6 min read
Key takeaways:
- Data scientists need a tailored portfolio of projects that they own and manage in order to have a sense of autonomy.
- The top skill or personality trait a successful data scientist can possess (and should possess) is curiosity.
- Managing a successful analytics team, and individual analytics professionals, is different from managing any other type of team.
- Data and analytics will be ubiquitous in the very near future.
- Analytics teams are different from any other team in the organization, and analytics professionals are a unique variant of creative professionals.
- Providing challenging, interesting, and valuable work for a data scientist, in the form of a personal project portfolio, can be done - and needs to be done - to ensure productivity, job satisfaction, value delivery, and retention.

We interviewed analytics leader and bestselling author John K. Thompson on data analytics, the future of analytics, and his recent book, Building Analytics Teams.

The interview in detail:

1. What are the fundamental concepts of building and managing a high-performing analytics team?

It is critically important to remember that data scientists are creative and intelligent people. They cannot be managed well in a command-and-control environment.

Data scientists need a tailored portfolio of projects that they own and manage in order to have a sense of autonomy. If they have a portfolio of projects and can manage their time and effort, the productivity of the team will be much higher than what is typically seen in teams managed in a traditional manner.

The relationship of the analytics leader with their peers and the executives of the company is critically important to the success of the analytics team.

It is also very important to realize that most analytics projects fail at the point where analytical models are to be implemented in production systems.

2. Tell us about your book, Building Analytics Teams. How is your book new and/or different from other books on data analytics?

Building Analytics Teams is focused on the practical challenges faced by people who are building and managing high-performance analytics teams, and by the staff members who make up those teams.

The book is different from other books in that it examines the process of building and managing a team from a holistic view. It considers the organizational framework, the required processes, the people, the projects, the problems, and the pitfalls. The content guides the reader through how to navigate these challenges and provides illustrations and examples of how to be successful. The book is a "how to" guide on successfully managing the analytics process in a large corporate environment.

3. What was the motivation behind writing this book?

I have not seen a book like this, and I wish I had had a book like this earlier in my career. I have built a number of analytics teams. While building and growing those teams, I noticed certain recurring patterns. I wanted to address the misconceptions and misperceptions people hold about analytics teams.

Analytics teams are unique. The team members who are successful have a different mindset and attitude toward project work and teamwork. I wanted to communicate the differences inherent in a high-performance analytics team when compared to other teams. Also, I wanted to communicate that managing a successful analytics team and individual analytics professionals is different than managing any other type of team.
I wanted to write a guide for managers and analytics professionals to help them understand how the broader organization views them, and how they can interface and interact with their peers in related organizational functions to increase the probability of joint success.

4. What should be the starting point for data analytics enthusiasts aiming to begin their journey in data analytics? How do you think your book will help them in their journey?

It depends on where they are starting their journey. If they are in the process of completing their undergraduate or graduate studies, I would suggest that they take classes in programming, data science, or analytics. If they are professionals, I would suggest that they take classes on Coursera, Udemy, or any other online educational platform to see if they have a real interest in, and affinity for, analytics.

If they do have an interest, then they should start working on analytics for themselves: test out analytical techniques, apply critical thinking, and try to understand what they can and cannot see in the data. If that works out and their interest remains, they should volunteer for projects at work that will enable them to work with data and analytics in a work setting. If they have the education, the affinity, and the skill, then apply for a data science position. Grab some data and make a difference!

5. What are the key skills required for someone to be successful working in data analytics? What are the pain points/challenges one should know?

The top skill or personality trait a successful data scientist can possess (and should possess) is curiosity. Without curiosity, you will find it difficult to be successful as a data scientist. It helps to be talented and well educated, but I have met many stellar data scientists who are neither. Beyond those traits, it is more important to be diligent and persistent. The most successful business analysts and data scientists I have ever worked with were all naturally and perpetually curious, and had a level of diligence and persistence that was impressive.

As for pain points and challenges: data scientists need to work on improving their listening skills, and their written and verbal communication and presentation skills. All data scientists need improvement in these areas.

6. What is the future of analytics? What will we see next?

I do believe that we are entering an era where data and analytics will be increasing in importance in all human endeavors. Certainly, corporate use of data and analytics will increase in importance - hence the focus of the book. But beyond corporations, the active and engaged use of data and analytics will increase in importance and daily use in managing many aspects of people's personal lives, academic pursuits, governmental policy, military operations, humanitarian aid, the tailoring of products and services, the building of roads, towns, and cities, the planning of traffic patterns, the provisioning of local, federal, and state services, intergovernmental relationships, and more.

There will not be an element of human endeavor that will not be touched and changed by data and analytics. Data is ubiquitous today, and data and analytics will be ubiquitous in the very near future. We will see more discussions on who owns data and who should be able to monetize it. We will experience increasing levels of AI and analytics across all the systems that we interact with, and most of it will go unnoticed, operating in the background for our benefit.
About: John K. Thompson is an international technology executive with over 30 years of experience in the business intelligence and advanced analytics fields. Currently, John is responsible for the global Advanced Analytics and Artificial Intelligence team and efforts at CSL Behring.
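Thompson's advice to newcomers - grab some data and see what you can and cannot see in it - can start very small. As a purely illustrative sketch (mine, not the book's), a first pass over a CSV file in Python might look like this; the file name and column names here are hypothetical:

```python
# A tiny first-look exploration of a dataset, in the spirit of
# "grab some data and see what you can and cannot see in it."
# Hypothetical sketch: assumes a local file named sales.csv with
# invented columns "region" and "revenue".
import pandas as pd

df = pd.read_csv("sales.csv")

print(df.shape)        # how much data do we have?
print(df.dtypes)       # what kinds of fields are we dealing with?
print(df.describe())   # summary statistics for numeric columns
print(df.isna().sum()) # where is data missing?

# A simple question of the data: which region drives the most revenue?
print(df.groupby("region")["revenue"].sum().sort_values(ascending=False))
```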
Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]

Richard Gall
29 Jul 2019
11 min read
"The service control platform is the next-gen of traditional API management," Kong CTO and co-founder Marco Palladino tells me. "It's not about APIs any more, it's about services."

This shift in the industry is what makes Kong so interesting, and it's one of the reasons I wanted to speak to Palladino. Kong's success is an index of businesses' technological priorities today, and a useful indicator of the way the world is going - one, it's safe to say, that's increasingly cloud-native and highly distributed.

As part of a broad and growing cloud-native ecosystem, Kong is playing an important role in the digital transformation of thousands of companies around the world. Furthermore, the fact that it follows an open core model, with an open source version of Kong made available in 2015, underlines the way in which the platform occupies a valuable position in the valley between developer enablement and managerial control. This isn't always an easy place to be. 'Digital transformation' is a well-worn phrase, but behind it is the messy truth about how companies actually use technology: at their own pace, and often shaped by necessity rather than best practice. So, with Kong a useful proxy for the state of the software industry, I wanted to dive deeper into Kong's challenges, as well as the opportunities the platform can potentially unlock for its users.

What is Kong?

Before going any further, it's probably worth explaining what Kong actually is. Essentially, Kong is an API management platform - it allows teams to manage how services interact and move within their architecture.

[Image: via konghq.com]

"APIs allow information to be in flight within our systems," Palladino explains. Information can, he continues, either be "at rest in a database" or "in use by a monolith or microservice." Naturally, then, it follows that "the more we decouple - the more we distribute our applications - the more information will be… in flight."

This is why Palladino believes Kong is so valuable today. The "flight" of information (he never uses the word "data") necessarily implies a network and, as anyone familiar with L. Peter Deutsch's 7 Fallacies of Distributed Computing will know, "the network is unreliable."

"So how do we protect that communication? How do we secure it? How do we connect it? How do we route that transmission?" Palladino says. "The more we decouple, the more we distribute, the more those problems become critical, if not essential, for a successful microservice organization… what Kong provides is a platform that allows us to intelligently broker the flow of information across the organization."

Why does the world need Kong?

Do we really need another API management solution? The short answer is relatively straightforward: the world is moving toward (micro)services, and Kong provides you with a way of managing them. This control is crucial, moreover, because "in microservices, being slow is the new down - if you're slow, you're down."

But that's only half of the picture. This "new world" is still in development and transition, with each organization following its own technological path. Kong is necessary because it supports and facilitates these unique transitions, all of them happening in different ways around the world. "Kong is a platform-agnostic system that can run across different architectures, but most importantly it can run across different platforms," Palladino says.
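Before going further, a brief concrete aside on what "brokering the flow of information" looks like in day-to-day use: with Kong, you typically register a service and attach a route to it through the Admin API. The sketch below is a hypothetical illustration in Python, assuming a local Kong instance with its Admin API on port 8001; the service name and upstream URL are invented, so treat the details as illustrative rather than definitive:

```python
# Hypothetical sketch: registering a service and a route via Kong's Admin API.
# Assumes a local Kong instance with the Admin API listening on port 8001.
# The service name and upstream URL are invented for illustration.
import requests

ADMIN = "http://localhost:8001"

# 1. Tell Kong about an upstream service it should broker traffic to.
svc = requests.post(f"{ADMIN}/services", data={
    "name": "orders",                      # hypothetical service name
    "url": "http://orders.internal:8080",  # hypothetical upstream address
})
svc.raise_for_status()

# 2. Attach a route, so requests hitting /orders on the proxy reach it.
route = requests.post(f"{ADMIN}/services/orders/routes", data={
    "paths[]": "/orders",
})
route.raise_for_status()

print("service:", svc.json()["id"], "route:", route.json()["id"])
```

From that point on, traffic to the proxy's /orders path is brokered by Kong - which is where the securing, routing, and observing Palladino describes can happen in one place.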
"While we do work very well with Kubernetes, we also support… traditional legacy virtual machines or bare-metal infrastructures. And the reason we support both the modern and the old is that we're working with enterprise organizations… [who] might be deploying new greenfield applications in Kubernetes or OpenShift but… still have a significant part of their software running in traditional virtual machines."

One of Kong's strengths, Palladino suggests, is its pragmatism and the way in which the company is alive to its customers' respective levels of technological maturity. "I'm very proud to say we're a very pragmatic company. While we do work with developers to make sure that Kong is a leader in what we do in microservices and traditional API management, we're also very pragmatic - we understand that's the end goal, it's not necessarily the current state of affairs in our enterprise organizations."

Read next: It's Black Friday: But what's the business (and developer) cost of downtime?

Kong sees itself as a 'strategic technology partner'

However, while every organization has its own timeline when it comes to technology, Kong's CTO describes the platform as one that is paving a way for the future rather than simply catering to the needs of its customers. "We're not an industry follower, we're an industry leader," says Palladino. "We're looking at these large-scale systems that organizations are creating and we're thinking how can we make that better from a security standpoint, from a discoverability standpoint, from a documentation standpoint?"

This isn't just Silicon Valley posturing. As the software world moves toward cloud and microservices, the landscape shifts at a much faster rate. That makes it essential for organizations like Kong to pave the way for the future rather than simply react to the needs and demands of their customers. In turn, this means the scope of Kong's work is growing. "We're not just a vendor. We don't give you the platform and then let you figure it out. We want to be a strategic technology partner with our customers," says Palladino. "We engage with them, not just from a low-level standpoint with the teams, but we also engage... from a higher-level executive standpoint, because we want to enable not just the technology but the business itself to be successful."

This is something Palladino is well aware of. Kong's customers aren't, after all, needlessly indulging in "an exercise in adopting new technologies," but are rather responding to business requirements. Having a more extensive relationship - or partnership, as Palladino puts it - ensures that digital transformation is both impactful and relatively risk-free.

Open source and the rise of bottom-up software adoption

However, although Kong positions itself as a company attuned to the business needs of its customers, it's also clear that it understands the developer's power in today's technology ecosystem. Palladino sees open source as playing a big part in this. As an open core platform, Kong is able to build a community of creative and innovative developers around the wider product ecosystem.
But Palladino is also keen to point out that you can't really separate open source from the API and microservices revolutions. "10 years ago APIs used to be a nice-to-have," Palladino says. The approach was, he explains, little more than a kind of curiosity, or a desire for a small community around a platform: "let's open up some APIs, let's open up this monolithic black box and see what happens." However, "it's not like that any more." If "APIs are the core business of every organization," as Palladino puts it to me, "then you simply can't afford to have a black box at the center of your infrastructure. You need to know what's happening and how services are interacting with one another - the way of achieving this is through open source software."

"When we look at the microservices transition, we look at Docker, we look at Kubernetes, we look at Elastic, we look at Zipkin… Kafka… Kong, what's the baseline? Open source. Each one of these products is open source at its core. Open source is driving this new transformation within the enterprise," says Palladino.

Palladino continues, offering a compelling narrative of why open source has become the dominant form of software. He begins with the problems posed by traditional central IT, "an ivory tower far from the business, far from real usage," which consequently was "not able to iterate fast enough to be able to answer those business requirements."

"The teams building the apps were closer to the business, closer to the customer, and they had to pick the right solution in order to be successful. And so what these… teams did was to go into self-service ecosystems - like... CNCF [Cloud Native Computing Foundation] - and pick and choose open source technologies they could adopt without having to go through an enterprise process… that's why open source became more important - because it allowed them to be in production and get business value without having to deal with the bureaucracy of central IT - so it's a bottom-up adoption from the teams all the way up, as opposed to from central IT down to all the teams."

Developer freedom and organizational control

Palladino refers to 'bottom-up' adoption a number of times throughout our conversation. He argues that it's an industry shift that has been initiated by microservices. "With the emergence of microservices something happened in the industry - software is not being sold top-down anymore as much as it used to be - it's more bottom-up adoption."

He also explains that having an open source element to the Kong offering is actually helping the company to grow; it's a useful onboarding route. "Sometimes - often, actually - Kong is being adopted just because the organization happens to already be running Kong in production, and you need enterprise features and enterprise support," says Palladino.

But while developer power seems to be part of this new post-central-IT world, it also makes Kong even more valuable for those in leadership positions. Taking the example of multi-cloud, Palladino explains: "it's very rare to see a CIO saying we would like to be multi-cloud. Sometimes it happens, [but] most likely the organization is already multi-cloud because it naturally evolved to be multi-cloud.
Different teams, different products using different clouds, different services."

With the wealth of tools, platforms, and environments being used by forward-thinking developers trying to solve the problems in their immediate vicinity, it makes sense that the "C-level executives" who express an interest in Kong are looking for "a way to consolidate and standardize how their APIs and microservices are being managed and secured across multiple clouds, across multiple platforms."

"A big concern for the leadership of the top Global 5000 organizations we're working with… [is] making sure they can consolidate how security is being done, how monitoring is being done, how observability and enablement are being done across multiple clouds," Palladino says.

Read next: Honeycomb CEO Charity Majors discusses observability and dealing with "the coming armageddon of complexity" [Interview]

The future of Kong and API management

The future for Kong looks bright. Two new features - Kong Brain and Kong Immunity - launched earlier this year, and they signal what the broader trends might be in the software infrastructure and systems engineering space. Both are backed by artificial intelligence and provide cutting-edge ways to manage the reliability and security of the services inside your infrastructure.

Kong Brain, Palladino explains, lets you "listen to… runtime traffic to auto-generate documentation for APIs… services, and monoliths" that organizations have no visibility on "after 20 years of running them in production." Essentially, then, it's a tool that will prove incredibly useful in working with legacy software; it will certainly encourage the 'lift and shift' mentality that we're starting to see emerge.

Kong Immunity, meanwhile, is a security tool that uses machine learning to detect anomalies in traffic, allowing users to identify security threats and breaches within their system. "Traditional web application firewalls… don't work within east-west traffic [server to server]," Palladino says. "They work perhaps in north-south traffic [client to server], but they're slow, they're very heavyweight." Kong, then, "tries to take away that security concern by providing a machine learning platform that can asynchronously, with no performance loss, learn from existing traffic across every version of every microservice."

With releases like these, it's hard to dispute Palladino's assertion that Kong is indeed an 'industry leader.' However, as Palladino also appears to be aware, to be truly successful it's not enough to just lead the industry - you have to make sure you can bring people with you.

Learn more about Kong here, and follow Marco Palladino on Twitter.
"The Vue.js community is one of Vue's biggest selling points" - Marina Mosti on Vue and JavaScript in 2019 [Interview]

Richard Gall
08 Nov 2019
9 min read
Vue occupies an interesting position in the triumvirate of frontend JavaScript frameworks. Not hyped to the extent that React is, and not as established as Angular, it's spent the last couple of years quietly minding its business and building an engaged and enthusiastic community of developers. One of these developers is Marina Mosti: her book Building Forms with Vue.js is just the latest step in her career journey from backend developer to Vue.js evangelist and educator. She's a great person to explain the attraction of Vue.js and to provide an insight into how she first entered the community - luckily, I was able to chat with her.

Buy Building Forms with Vue.js on the Packt store.

Her background is interesting: "I actually started out as a PHP developer and found myself in a position where I forced myself into learning front end. It's not until very recently that I started doing front end in terms of it being my main focus," she says. Rather than moving deeper down the stack, she has gone the other way, gravitating towards the front end. That might not be completely conventional, but it's also indicative of the evolution of both JavaScript and frontend development in general.

Read the first chapter of the book for free on the Packt platform.

Rethinking JavaScript

Things today, Marina suggests, are quite different. She's quick to tell me, for example, that "the current state of JavaScript is just so much better than what it used to be," and recalls a general antipathy towards JavaScript in the early years of her career that she "dragged… around for many years."

"I went to this very small school where they taught us the basics of front end development," she explains. "And for good or for bad the teachers were always very adamant about saying 'oh, don't bother learning vanilla JavaScript because we have jQuery, and vanilla JavaScript is so bad that you're not gonna use it.'"

However, the framework boom changed this. "I got to a point where all these cool frameworks were coming out and you just realise 'hey, I don't know JavaScript; I know jQuery, so how do I make this jump?'"

This 'jump' wasn't without challenges. "Just having to go back and learn the basics - that has been the most 'challenging' part of going onto front end - because the front end that I knew [at the time] was PHP server-side generated HTML code with maybe a little bit of JavaScript, maybe some CSS."

Read next: Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API

Laravel and Vue.js

Marina explains that she first encountered Vue while using Laravel. "We... wanted to take advantage of this built-in connection that Vue had with Laravel. Obviously this didn't involve any fancy setup - there was no Vue CLI, nothing of this good pre-compiling or anything - it was just injecting Vue into the HTML and creating the components there on the fly."

However, discovering Vue in this way proved to be revelatory: it actually underlined what makes it, in Marina's view, a great front end framework. "You don't really have to commit to the whole framework," she says; "it just got the job done for what I needed at that moment."

What problems can Vue.js solve?

Marina has a pretty clear perspective on the challenges in front end development. "The problem we're currently facing in front end development... is that people don't want the old browsing experience where you're clicking around and having to wait for reloads," she says.
"We want to make applications… that are very performant, they have a great user experience, you are not waiting for page refreshes - it flows. What we are looking for is having your applications flow in a way that makes sense, like if you were using a desktop application."

Vue is particularly good for this, Marina explains, because of its "reusable component structure." To a certain extent, this way of working is just another example of the wider trend across engineering towards breaking things apart. "You are trying to make these small units of code that do this specific thing… a very common way to describe it is like this Lego system where you're just putting pieces on top of each other and it just starts making sense." I've heard that analogy before in relation to containers - there's clearly a recurring theme that's evolving out of core principles of design.

If Vue is well suited to building lightweight but highly performant front ends, another important element is that it makes development relatively easy. Again, Marina contrasts working with Vue today with what JavaScript development looked like in the past. "You used to have... these massive, massive amounts of code, and code separation of concerns was just very complicated to manage. You had to [do] a lot of overhead and work... trying to figure out how are we going to structure these files so that it makes sense."

The Vue CLI

Tooling was also more complicated - something that the Vue CLI has helped to solve. "You had to deal with - at a very intricate level - how tools like Grunt worked, for example. And now you have these pre-built tools like the Vue CLI which allow you to not have to really think about things," she says. "You don't have to think like 'hey, how is this going to get compiled? How is webpack going to figure things out?' At least not at an entry level, because you have it all neatly packed in this box for you with the Vue CLI."

Comparing Vue.js to React and Angular

Although it's clear that Marina is incredibly passionate and enthusiastic about Vue, she's also circumspect about ranking JavaScript frameworks against each other. "All 3 frameworks are fantastic. All 3 get the job done. This is like asking someone why is this your favorite flavor of ice cream?!"

Vue.js v. React

She notes that React and Vue have a lot in common. "They share a lot of similarities - both of them use a virtual DOM… they both have this reactive component structure." The key difference, from Marina's perspective, is JSX. "If you're talking about React, you're talking about JSX, this approach where everything is JavaScript. You're writing HTML inside the JavaScript, you're writing CSS inside the JavaScript."

It's JSX that puts Marina off React; the way it requires you to work, she says, "doesn't really click. I know how to do it, but just in the way I like to code things I prefer just having the separation where HTML is HTML, and where CSS is CSS."

Want to learn React.js? Search the latest React eBooks and videos.

Read next: Ionic React released; Ionic Framework pivots from Angular to a native React version

Vue.js v. Angular

Angular, meanwhile, is "great for enterprise projects where you need this huge, huge framework," says Marina. "But that also comes at a cost of having to know all the framework.
All the libraries, everything that Angular brings to the table - you have to know TypeScript - it's just very opinionated at what it does, and sometimes the shoe is going to be very big for the project."

So, for Marina, Vue has a degree of flexibility. It's not as opinionated as Angular, and it doesn't require you to write using JSX. "It can grow up until the point you need it to… from the smallest component in your application to powering full enterprise solutions." And related to this, it means Vue is accessible - the learning curve isn't that steep. "Vue is just very gentle in that you can start using it; you can start building things right away."

"There's a very good payoff in making yourself an expert in it... once you start getting into the core of Vue, and understanding all the little tools that are at your disposal… you can start building upon this knowledge... the framework can grow and adjust to what you're needing."

Search the latest Angular eBooks and videos.

The Vue.js community

There are undoubtedly many technical reasons to consider using Vue. However, another aspect that Marina emphasises throughout our conversation is how welcoming and supportive the Vue community is. "The Vue community is one of the biggest selling points of why you should pick Vue and why Vue is so amazing."

The Vue community was, Marina says, integral in getting her to where she is now. "I was, at one point, not very into Vue… and I just found a very welcoming community and a very inclusive community... People in the community care about other developers that are getting into Vue. We try our best to make this feel like a very safe, inclusive community, to just get people in here and get them developing Vue and help them out with the problems they're having."

Marina deserves credit for playing a part in fostering a welcoming and supportive culture. Not only has she created a wealth of learning materials (such as a great free introductory tutorial series), she also works closely with Vue Vixens, and provides mentoring and support for other women finding their way in the industry. "This focus on education just basically became my goal… hey, let's do things to teach people, to get more people involved with Vue," she says.

In an industry that's sometimes defined by hyper-competitiveness, and marred by toxicity, it's certainly a worthwhile and important goal. It's something we can all work at.

Follow Marina Mosti on Twitter: @MarinaMosti

Follow Vue Vixens on Twitter: @VueVixens