DEVOPS Interview
Git
Jenkins
Selenium
Puppet
Chef
Ansible
Nagios
Docker
Monit
ELK (Elasticsearch, Logstash, Kibana)
Collectd/Collectl
Git (GitHub)
3. What are the core operations of DevOps in terms of development and Infrastructure?
Application development
Code developing
Code coverage
Unit testing
Packaging
Deployment
With infrastructure
Provisioning
Configuration
Orchestration
Deployment
4. What are the advantages of DevOps with respect to Technical and Business perspective?
Technical benefits: faster product development, the creation of production feedback loops that drive further development, and improvement of IT operations.
Business benefits: faster delivery of features and more stable operating environments.
How is DevOps different from Agile?
Agile: an iterative approach to software development focused on collaboration between customers and developers; it is not the same thing as DevOps.
DevOps: a culture and process that extends that collaboration to IT operations, so that development and operations are no longer framed as separate groups. It is problem-solving oriented, it has developers involved in managing production, and it treats release management as development-driven.
Python is the scripting language most commonly recommended for DevOps work.
Vagrant is a tool that can create and manage environments for testing and developing
software. It originally used VirtualBox as the hypervisor for its virtual environments,
and it now also supports KVM (Kernel-based Virtual Machine) among other providers.
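A minimal Vagrantfile sketch of such an environment (this is a configuration fragment in Vagrant's Ruby DSL; the box name ubuntu/bionic64, the resource sizes, and the inline provisioning command are illustrative choices, not taken from the text above):

```ruby
# Vagrantfile: declares a reproducible development VM.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"        # base image ("box") to boot

  # Default provider is VirtualBox; KVM is typically reached via the libvirt provider.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
    vb.cpus = 2
  end

  # Provision the guest the first time `vagrant up` runs.
  config.vm.provision "shell", inline: "apt-get update && apt-get -y install nginx"
end
```

The usual workflow is then: vagrant up to create the VM, vagrant ssh to enter it, and vagrant destroy to tear it down.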
12. What are the major differences between the Linux and Unix operating systems?
Unix: a family of proprietary operating systems, used mainly on servers and workstations.
Linux: an open-source, Unix-like operating system whose kernel follows the design of the
Unix kernel; Linux distributions are widely used on personal computers as well. Linux has
probably been home to every programming language known to humankind.
13. How can we make sure a new service is ready for launch?
Backup System
Recovery plans
Load Balancing
Monitoring
Centralized logging
17. What are the top 10 skills a person should have for a DevOps position?
To handle revision control, post your code on SourceForge or GitHub so everyone can view it,
and ask the viewers to give suggestions for improving it.
GET
HEAD
PUT
POST
PATCH
DELETE
TRACE
CONNECT
OPTIONS
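A few of these HTTP methods can be exercised against a throwaway local server. The sketch below uses only the Python standard library; the handler and the "hello" payload are made up for illustration. It shows the key GET/HEAD contrast: both return the same status and headers, but HEAD carries no body.

```python
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class EchoHandler(BaseHTTPRequestHandler):
    """Tiny handler answering GET and HEAD so the two can be compared."""
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)           # GET returns headers AND a body

    def do_HEAD(self):
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()               # HEAD returns headers only

    def log_message(self, *args):        # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
get_status, get_body = resp.status, resp.read()
print(get_status, get_body)              # body present for GET

# Fresh connection: the simple server closes the socket after each response.
conn2 = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn2.request("HEAD", "/")
resp2 = conn2.getresponse()
head_status, head_body = resp2.status, resp2.read()
print(head_status, head_body)            # same status, empty body for HEAD
server.shutdown()
```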
Basic Questions
1) DevOps! How can you define it in your own words?
It is highly effective daily collaboration between software developers and IT operations /
web operations engineers to produce a working system or release software.
Scrum is used to manage complex software and product development, using iterative and
incremental practices. Scrum has three roles: product owner, scrum master, and team.
Technical Questions
6) Have you worked on containers ?
Containers are a form of lightweight virtualization, heavier than chroot but lighter than
hypervisors. They provide isolation among processes while using the same kernel as the host
machine, relying on namespace and cgroups functionality within the kernel. Container
formats differ among themselves in that some provide a more VM-like experience while
others containerize only a single application.
LXC containers are the most VM-like and the most heavyweight, while Docker used to be
lighter weight and was initially designed for single-application containers. In more
recent releases, however, Docker introduced whole-machine containerization features, so it
can now be used both ways. There is also rkt from CoreOS, and LXD from Canonical,
which builds upon LXC.
Note: Other available and popular CI tools are Jenkins, TeamCity, CircleCI,
Hudson, Buildbot, etc.
Currently, several scripting languages are available, so the question arises: what is the
most appropriate language for the DevOps approach? Simply, it depends on the context of
the project and the tools used. For example, if Ansible is used, it is good to have
knowledge of Python; if it is Chef, then Ruby.
13) What is the purpose of CM tools, and which ones have you used?
The purpose of Configuration Management tools is to automate the deployment and
configuration of software on a large number of servers. Most CM tools use an agent
architecture, which means that every machine being managed needs to have an agent
installed. My favorite tool is one that uses an agentless architecture: Ansible. It only
requires SSH and Python; and if the raw module is being used, not even Python is required,
because it can run raw bash commands. Other available and popular CM tools are Puppet,
Chef, and SaltStack.
SaaS is a piece of software that runs over the network on a remote server and has only its
user interface exposed to users, usually in a web browser. For example, salesforce.com.
The EC2 service is inseparable from the concept of the Amazon Machine Image (AMI). The
AMI is the image of the virtual machine that will be executed. EC2 is based on Xen
virtualization, which is why it is quite easy to move Xen servers to EC2.
However, when it comes to the data layer, relational databases (RDBMS) do not allow
simple scaling and do not provide a flexible data model. Managing more users means adding
more servers, and large servers are complex, proprietary, and disproportionately
expensive, in contrast to the low-cost "commodity hardware" architectures used in the
cloud. Organizations are beginning to see performance issues with their relational
databases for existing or new applications. Especially as the number of users increases,
they realize the need for a faster and more flexible data layer. This is the time to begin
to assess and adopt NoSQL databases in their web applications.
20) What are the main difficulties in migrating from SQL to NoSQL?
Each record in a relational database conforms to a schema, with a fixed number of fields
(columns), each having a specified name and data type. Every record has the same
structure. The data is normalized across several tables, the advantage being that there is
less duplicate data in the database. The downside is that a change to the schema means
performing several "alter table" statements, which require expensive locks on multiple
tables simultaneously to ensure that the change does not leave the database in an
inconsistent state.
With document databases, on the other hand, each document can have a completely different
structure from other documents. No additional management is required on the database side
to handle schema changes.
Flexible data model: data can be inserted without a defined schema, and the format of the
data being inserted can change at any time, providing extreme flexibility, which
ultimately allows significant business agility.
Consistent, high performance: advanced NoSQL database technologies put cached data,
transparently, in system memory; this behavior is completely transparent to the developer
and to the team in charge of operations.
Easy scalability: some NoSQL databases automatically propagate data between servers,
requiring no participation from applications. Servers can be added and removed without
disruption to applications, with data and I/O spread across multiple servers.
Components of Ansible
Playbooks : Ansible playbooks are a way to send commands to remote computers in a
scripted way. Instead of using Ansible commands individually to remotely configure
computers from the command line, you can configure entire complex environments by
passing a script to one or more systems.
Ansible playbooks are written in the YAML data serialization format. If you don't know
what a data serialization format is, think of it as a way to translate a programmatic data
structure (lists, arrays, dictionaries, etc) into a format that can be easily stored to disk.
The file can then be used to recreate the structure at a later point. JSON is another
popular data serialization format, but YAML is much easier to read.
Let's look at a basic playbook that allows us to install a web server (nginx) on
multiple hosts:

---
- hosts: webservers
  tasks:
    - name: Installs nginx web server
      apt: pkg=nginx state=installed update_cache=true
      notify:
        - start nginx
  handlers:
    - name: start nginx
      service: name=nginx state=started
The hosts file: (by default under /etc/ansible/hosts) this is the Ansible inventory file,
and it stores the hosts and their mappings to host groups (webservers, dbservers, etc.)
# example of setting a host inventory by IP address.
# also demonstrates how to set per-host variables.
[webservers]
10.0.15.22

# example of setting a host by hostname. Requires local lookup in /etc/hosts
# or DNS.
[repository_servers]
example-repository

[dbservers]
db01
The SSH key: for the first run, we'll need to tell Ansible the SSH and sudo passwords,
because one of the things that the common role does is to configure passwordless sudo and
deploy an SSH key. After that, Ansible can execute the playbook's commands on the remote
nodes (hosts) and deploy the web application (nginx).
1. What have you been doing over the last 1-2 years?
This will help you, as a DevOps manager, to understand with what specific tools and
technologies the engineer has been working over the past few years (these can include
Git, Puppet, Jenkins, Docker, Ansible, and scripting languages). Also, it will reveal
the candidate's ability to work in a team, as the candidate will most likely divulge
whether he or she flew solo or was part of a bigger outfit. If the person's answer does
not include this information, then that is another must-ask question.
It is critical to take note of the roles in which the candidate has served and the tasks
that the candidate has performed, even if they are not strictly required in your
organization or in the role for which he or she is interviewing. If the prospect does not
mention the exact tools that you currently use, follow up with questions about those
tools and tasks to get a good feel for his or her ability to assimilate knowledge as well
as his or her general operating dependencies. Good candidates will always
demonstrate a deep understanding in the field of their operation while others will
reply with superficial answers to drill-down follow-up questions.
This question is critical for any DevOps position. As more and more DevOps teams
move towards automating and adopting continuous delivery best practices, it is critical
to gauge whether the candidate is comfortable talking about code deployment and
whether he or she understands how all of the available continuous integration
tools and DevOps tools fit together. If you have a drawing board available, let him or
her build a diagram for you.
Depending on the answers that you get, you can develop further lines of questioning
dynamically. For example: Do you have a database in the stack? How do you
update the schema? What tests do you run, and how do you run them? If all tests
pass, how is the code deployed into production? How do you make sure that you do
not lose traffic during deployment?
A good way of assessing the suitability of a candidate is to ask them to tell the story of
a failed deployment and how it was handled. Specific, follow-up questions can
include: How do you know there was a deployment failure? Do you roll back
automatically? and What criteria do you use?
Again, you could use the storytelling tactic: tell me about a crisis in production that
you had, how you became aware of it, and how it was solved. A good war story is always
enlightening; it will help you to assess not only how skilled the candidates are in
monitoring but also how they handle crises (assuming that they tell the truth, of
course).
Other leading questions: What monitoring tools do you work with? Did you
choose them? If so, why? and How do you get alerted? I have found that the best
candidates will have plenty to share about their monitoring expertise and specifically
about advanced user-experience monitoring techniques.
Of course, this question and the responses can vary, but the idea is to gauge the
technical expertise of the engineer in a Linux environment, which is a must in
almost all DevOps positions.
It's a good idea to change the bash command as you receive the answers. If you feel
the questions are too easy, try raising the bar with more advanced bash questions. For
example, what is the difference between cmd1 ; cmd2 and cmd1 && cmd2?
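The difference the question is after can be demonstrated directly. A small sketch using Python's subprocess module to invoke a shell (the helper name shell and the example commands are mine, for illustration):

```python
import subprocess

def shell(cmd):
    """Run a command line through the shell and return its stdout."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

# `;` is an unconditional separator: cmd2 runs regardless of cmd1's exit status.
print(repr(shell("false ; echo ran")))    # 'ran\n'

# `&&` is conditional: cmd2 runs only if cmd1 exits with status 0.
print(repr(shell("false && echo ran")))   # ''
print(repr(shell("true && echo ran")))    # 'ran\n'
```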
You might want to prepare a quiz sheet with a list of five to ten commands. This way,
the candidate will find it easier to answer.
6. Without using Docker, can you see the processes running inside a container
from the outside?
Ok, we cheated here. Not every company is using Docker or even containers at all, so
this question is a bit technology-specific. Based on our expertise and on the data
in The 2016 DevOps Pulse survey that we recently released, more and more
companies are moving to microservices and containerized architectures. So, we added
this question to the list.
Of course, this question is meant to figure out whether the candidate understands how
containerization works. Instead of asking How do containers work? or What is a
Docker image?, the answer to the question above will inform you whether the person
gets it. Other questions may include How does container linking work? or How
and why would you optimize a Dockerfile?
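The answer a strong candidate should reach for: yes, because containers share the host's kernel, their processes appear in the host's process table. One hedged sketch is to enumerate processes by walking /proc directly (Linux-only; the helper host_processes is mine, and a real check would additionally match each PID's cgroup path to a specific container):

```python
import os

def host_processes():
    """List (pid, command name) for every process visible to the host kernel.

    Run on the host, this listing includes processes started inside containers,
    because containers are ordinary processes sharing the host's kernel.
    """
    procs = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                procs.append((int(entry), f.read().strip()))
        except OSError:
            continue  # process exited while we were scanning
    return procs

for pid, name in host_processes()[:5]:
    print(pid, name)
```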
This is another question meant to gauge the candidate's system understanding and
Linux expertise.
A good candidate will be able to detail the correct order and significance of at least
some of the various stages (e.g., BIOS, MBR, bootloader, kernel, initialization, and
runlevel). To drill down further, I'd recommend a follow-up question such as: what
information needs to be provided to the bootloader?
Many candidates will not know the answer to this question while others will offer
only a partial answer. A good way to separate the DevOps wheat from the chaff is to
see if the candidate only explains that the command prints the route that packets take
to the network host or if he or she also delves into the how.
Even if you do not receive a correct and complete answer, this question is a good
starting point for a deeper conversation in which you can brainstorm with the
candidate. In this process, you can try to come up with valid possibilities and discount
invalid ones based on a solid understanding of IP routing.
Another example of a good networking question that I often use: What is the
difference between trying to connect to a port that is not being listened to as opposed
to one that is firewalled in terms of TCP?
This question enables you to learn whether the candidate understands the meaning of
load average in the first place. If they understand and explain that it is not CPU usage,
it is a great opening for a deeper discussion on troubleshooting performance.
Useful follow-ups: Is it possible to observe high load with low CPU usage? If so,
what may be the reasons? and How would you check?
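The load-average discussion can be grounded with a quick programmatic check (a sketch for Unix-like systems only; the threshold comparison against the core count is a rough heuristic, not a hard rule):

```python
import os

# 1-, 5-, and 15-minute load averages, the same numbers `uptime` shows.
one, five, fifteen = os.getloadavg()
cores = os.cpu_count()

# Load average counts runnable AND uninterruptible (e.g. disk-wait) tasks,
# so high load with low CPU usage usually points at I/O, not CPU.
print(f"load: {one:.2f} {five:.2f} {fifteen:.2f} across {cores} cores")
if one > cores:
    print("more runnable/waiting tasks than cores; investigate CPU or I/O")
```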
The main idea of the FizzBuzz test is to see how a developer handles an easy coding
task. Live simulations are a good way to see how quick engineers are on their feet as
well as how they grasp a simple task and then translate it into code.
Write a program or script that prints out the numbers between 1 and
100
For each number that is divisible by three, Fizz is printed
For each number that is divisible by five, Buzz is printed
For each number that is divisible by both three and five, FizzBuzz is
printed
Most good developers should be able to write such a program on paper within a
couple of minutes. See how they write the code, ask them why they wrote specific
parts in certain ways, and then check the validity of the code.
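A minimal sketch of an acceptable answer (Python chosen for brevity; any language the candidate is comfortable with would do):

```python
def fizzbuzz(n):
    """Return the FizzBuzz line for a single number."""
    if n % 15 == 0:          # divisible by both three and five
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 101):
    print(fizzbuzz(i))
```

A common follow-up is to ask why the divisible-by-fifteen check must come first: testing three or five first would shadow the combined case.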
DevOps is a newly emerging term in IT field, which is nothing but a practice that emphasizes the collaboration and
communication of both software developers and other information-technology (IT) professionals. It aims at
establishing a culture and environment where building, testing, and releasing software, can happen rapidly,
frequently, and more reliably.
DevOps focuses on four primary areas within IT:
1. Culture.
2. Organization (style including roles).
3. Processes.
4. Tools.
What are the nine things that make up Dev & Ops?
It is a newly emerging term in the IT field: a practice that emphasizes the collaboration
and communication of both software developers and other information-technology (IT)
professionals. It focuses on delivering software products faster and lowering the failure
rate of releases.
Infrastructure as code
Continuous deployment
Automation
Monitoring
Security
3) What are the core operations of DevOps with application development and with
infrastructure?
Application development
Code building
Code coverage
Unit testing
Packaging
Deployment
With infrastructure
Provisioning
Configuration
Orchestration
Deployment
4) Explain how Infrastructure as Code is processed or executed in AWS.
In AWS,
A simpler scripting language will be better for a DevOps engineer. Python seems to be very
popular.
DevOps can help developers fix bugs and implement new features quickly. It also enables
clearer communication between team members.
Jenkins
Nagios
Monit
ELK (Elasticsearch, Logstash, Kibana)
Jenkins
Docker
Ansible
Git
Collectd/Collectl
8) Mention an instance when you have used SSH.
I have used SSH to log into a remote machine and work on the command line. Besides this,
I have also used it to tunnel into the system in order to facilitate secure encrypted
communications between two untrusted hosts over an insecure network.
GET
HEAD
PUT
POST
PATCH
DELETE
TRACE
CONNECT
OPTIONS
11) Explain what you would check if a Linux build server suddenly starts getting slow.
If a Linux build server suddenly starts getting slow, check the following three things:
Application-level troubleshooting: check the application log file or application server
log file (HTTP, Tomcat, JBoss, or WebLogic logs) to see whether application
response/receive time is the cause of the slowness, or whether any application has a
memory leak.
System-level troubleshooting: check for RAM-related issues, disk I/O read/write issues,
disk-space-related issues, and general system performance issues.
Dependent-services troubleshooting: check for antivirus-related issues, firewall-related
issues, network issues, and SMTP server response time.
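As one concrete system-level check from the list above, free disk space can be inspected programmatically. A minimal sketch using the standard library (the helper name disk_usage_report is mine):

```python
import shutil

def disk_usage_report(path="/"):
    """Return (total_gb, used_gb, free_gb, percent_used) for a mount point."""
    usage = shutil.disk_usage(path)
    gb = 1024 ** 3
    percent_used = usage.used / usage.total * 100
    return (usage.total / gb, usage.used / gb, usage.free / gb, percent_used)

total, used, free, pct = disk_usage_report("/")
print(f"/: {pct:.1f}% used ({free:.1f} GiB free)")
```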
12) How would you know whether your video card can run Unity?
/usr/lib/nux/unity_support_test -p
This will give detailed output about Unity's requirements, and if they are met, then your
video card can run Unity.
14) What is the quickest way to open an Ubuntu terminal in a particular directory?
To open an Ubuntu terminal in a particular directory, you can use a custom keyboard
shortcut. To do that, in the command field of a new custom keyboard shortcut, type
gnome-terminal --working-directory=/path/to/dir.
15) Explain how you can get the current color of the current screen on the Ubuntu desktop?
You can open the background image in GIMP (an image editor) and then use the dropper
tool to select the color at a specific point. It gives you the RGB value of the color at
that point.
Memcached helps in:
CAS tokens: a CAS token is attached to any object retrieved from the cache. You can use
that token to save your updated object.
Callbacks: they simplify the code.
getDelayed: it reduces the delay time of your script that is waiting for results to come
back from the server.
Binary protocol: you can use the binary protocol instead of ASCII with the newer client.
Igbinary: previously, the client always had to serialize values with complex data, but
with Memcached you can use the igbinary option.
19) Explain whether it is possible to share a single instance of a Memcache between multiple
projects?
20) You have multiple Memcached servers, and one of the Memcached servers holding your
data fails. Will the client ever try to get key data from that failed server?
The data in the failed server won't get removed, but there is a provision for automatic
failover, which you can configure for multiple nodes. Failover can be triggered during any
kind of socket or Memcached-server-level error, and not during normal client errors like
adding an existing key, etc.
21) Explain how you can minimize Memcached server outages.
When one instance fails, several of them go down, and this puts a larger load on the
database server as lost data is reloaded while clients make requests. To avoid this, if
your code has been written to minimize cache stampedes, then it will leave a minimal
impact.
Another way is to bring up an instance of Memcached on a new machine using the lost
machine's IP address.
Code is another option to minimize server outages, as it gives you the liberty to change
the Memcached server list with minimal work.
Setting a timeout value is another option that some Memcached clients implement for
Memcached server outages. When your Memcached server goes down, the client will keep
trying to send a request until the timeout limit is reached.
22) Explain how you can update Memcached when data changes?
When data changes, you can update Memcached by:
Clearing the cache proactively: clearing the cache when an insert or update is made.
Resetting the cache: similar to the first method, but rather than just deleting the keys
and waiting for the next request for the data to refresh the cache, reset the values after
the insert or update.
23) Explain what the dogpile effect is. How can you prevent this effect?
The dogpile effect refers to the event when a cache expires and websites are hit by
multiple requests made by clients at the same time. This effect can be prevented by using
a semaphore lock. In this system, when a value expires, the first process acquires the
lock and starts generating the new value.
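A minimal sketch of that lock-based approach, using an in-process dictionary and a threading lock to stand in for Memcached and its lock key (the cache layout, TTL handling, and expensive_compute placeholder are all illustrative; a real deployment would store the lock in Memcached itself):

```python
import threading
import time

cache = {}                      # key -> (value, expiry_timestamp)
regen_lock = threading.Lock()   # stands in for a lock key in the cache

def expensive_compute(key):
    return f"value-for-{key}"   # stands in for a slow database query

def get(key, ttl=60):
    entry = cache.get(key)
    now = time.time()
    if entry and entry[1] > now:
        return entry[0]                      # fresh hit: no recomputation
    # Value missing or expired: only ONE caller regenerates it.
    if regen_lock.acquire(blocking=False):
        try:
            value = expensive_compute(key)
            cache[key] = (value, now + ttl)
            return value
        finally:
            regen_lock.release()
    # Lost the race: serve the stale value (if any) instead of stampeding.
    return entry[0] if entry else None

print(get("user:42"))
```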
25) When the server gets shut down, is the data stored in Memcached still available?
Data stored in Memcached is not durable, so if the server is shut down or restarted then
all the data stored in Memcached is deleted.
1. What is ANT?
Ans. Ant (originally an acronym for Another Neat Tool) is a Java-based build tool. A build
tool automates tasks in the build process, such as compiling source code, running tests,
and packaging the output.
Ques 1. How many messaging models do JMS provide for and what are they?
Ans. JMS provide for two messaging models, publish-and-subscribe and point-to-point queuing.
Ans. JMS stands for Java Message Service. It is Java's answer to creating software
using asynchronous messaging. It is one of the official specifications of the J2EE
technologies and is a key technology.
Ans. In RPC, the method invoker waits for the method to finish execution and return
control back to the invoker; thus it is completely synchronous in nature. In JMS, the
message sender just sends the message to the destination and continues its own processing.
The sender does not wait for the receiver to respond. This is asynchronous behavior.
Ans. JMS is asynchronous in nature, so not all the pieces need to be up all the time for
the application to function as a whole. Even if the receiver is down, the MOM will store
the messages on its behalf and will send them once it comes back up. Thus at least a part
of the application can still function, as there is no blocking.
Ques 5. What are the different types of messages available in the JMS API?
Ans. Message, TextMessage, BytesMessage, StreamMessage, ObjectMessage, and MapMessage.
Ans. A topic is typically used for one-to-many messaging, i.e., it supports the
publish-subscribe model of messaging. A queue is used for one-to-one messaging, i.e., it
supports point-to-point messaging.
Ans. A Message is a lightweight message having only a header and properties and no
payload. Thus, if the receivers are to be notified about an event, and no data needs to be
exchanged, then using Message can be very efficient.
Ques 9. What is the basic difference between Publish Subscribe model and P2P model?
Ans. The publish-subscribe model is typically used in one-to-many situations; it is
unreliable but very fast. The P2P model is used in one-to-one situations; it is highly
reliable.
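The one-to-one versus one-to-many distinction can be sketched with plain in-process queues. This is illustrative only, not the JMS API (Python and the publish helper are stand-ins chosen for brevity): in point-to-point each message is consumed exactly once, while a topic hands every subscriber its own copy.

```python
from queue import Queue

# Point-to-point: one queue; each message is consumed by exactly one receiver.
q = Queue()
q.put("order-1")
received = q.get()           # receiver A takes the message
print(received, q.empty())   # receiver B would find the queue empty

# Publish-subscribe: a "topic" copies each message to every subscriber's queue.
def publish(topic_subscribers, message):
    for sub in topic_subscribers:
        sub.put(message)     # every subscriber receives its own copy

subscribers = [Queue(), Queue()]
publish(subscribers, "price-update")
copies = [sub.get() for sub in subscribers]
print(copies)
```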
Ans. TextMessage contains an instance of java.lang.String as its payload, so it is very
useful for exchanging textual data. It can also be used for exchanging complex character
data, such as an XML document.
Ans. An implementation of the JMS interface for a Message Oriented Middleware (MOM).
Providers are implemented as either a Java JMS implementation or an adapter to a non-Java
MOM.
Ans. An object that contains the data being transferred between JMS clients.
Ans. A staging area that contains messages that have been sent and are waiting to be read.
Note that, contrary to what the name queue suggests, messages don't have to be delivered
in the order sent. If the message-driven bean pool contains more than one instance, then
messages can be processed concurrently, and thus it is possible that a later message is
processed sooner than an earlier one. A JMS queue guarantees only that each message is
processed only once.
Ans. A distribution mechanism for publishing messages that are delivered to multiple subscribers.
Ques 19. What is JMS?
Ans. Java Message Service (JMS): an interface implemented by most J2EE containers to
provide point-to-point queueing and topic (publish/subscribe) behavior. JMS is frequently
used by EJBs that need to start another process asynchronously.
For example, instead of sending an email directly from an Enterprise JavaBean, the bean may
choose to put the message onto a JMS queue to be handled by a Message-Driven Bean
(another type of EJB) or another system in the enterprise. This technique allows the EJB to return
to handling requests immediately instead of waiting for a potentially lengthy process to
complete.
Ques 21. How many messaging models do JMS provide for and what are they?
Ans. JMS provides for two messaging models, publish-and-subscribe and point-to-point queuing.
Ans. A point-to-point model is based on the concept of a message queue: senders send
messages into the queue, and the receiver reads messages from this queue. In the
point-to-point model, several receivers can exist, attached to the same queue; however,
the MOM (Message Oriented Middleware) will deliver each message to only one of them. Which
one depends on the MOM implementation.
Ans. A publish-subscribe model is based on the message topic concept: Publishers send
messages in a topic, and all subscribers of the given topic receive these messages.
Ans. With publish/subscribe message passing, the sending application/client establishes a
named topic in the JMS broker/server and publishes messages to this topic. The receiving
clients register (specifically, subscribe) via the broker to messages by topic; every
subscriber to a topic receives each message published to that topic. There is a
one-to-many relationship between the publishing client and the subscribing clients.
Ans. The JMS provider handles security of the messages, data conversion, and client
triggering. The JMS provider specifies the level of encryption and the security level of
the message, as well as the best data type for the non-JMS client.
Ques 29. What is the difference between JavaMail and JMS queue?
Ans. JMS is the ideal high-performance messaging platform for intrabusiness messaging, with full
programmatic control over quality of service and delivery options.
JavaMail provides lowest common denominator, slow, but human-readable messaging using
infrastructure already available on virtually every computing platform.
Ans. The JMS specification defines a transaction mechanism allowing clients to send and
receive groups of logically bounded messages as a single unit of information. A Session
may be marked as transacted, meaning that all messages sent in the session are considered
parts of a transaction. A set of messages can be committed (commit() method) or rolled
back (rollback() method). If a provider supports distributed transactions, it is
recommended to use the XAResource API.
Ans. Synchronous messaging involves a client that waits for the server to respond to a message.
So if one end is down the entire communication will fail.
Ques 32. What is asynchronous messaging?
Ans. Asynchronous messaging involves a client that does not wait for a message from the
server; an event is used to trigger a message from the server. So even if the client is
down, the messaging will complete successfully.
Ans. A single-threaded context for sending and receiving JMS messages. A JMS session can be
nontransacted, locally transacted, or participating in a distributed transaction.
Ques 35. What is the use of JMS? In which situations we are using JMS? Can we send
message from one server to another server using JMS?
Ans. JMS is the ideal high-performance messaging platform for intrabusiness messaging, with full
programmatic control over quality of service and delivery options.
Ques 36. What is the difference between durable and non-durable subscriptions?
Ans. A durable subscription gives a subscriber the freedom of receiving all messages from
a topic, whereas a non-durable subscription does not make any guarantees about messages
sent by others while the client was disconnected from the topic.
For context: the Point-To-Point (PTP) model allows exchanging messages via queues created
for some purpose. A client can send and receive messages from one or several queues. The
PTP model is simpler than the pub/sub model.
Ques 37. What is the difference between Message producer and Message consumer?
In the point-to-point model, messages are routed to an individual consumer, which
maintains a queue of "incoming" messages. Messaging applications (producers) send messages
to a specified queue, and clients (consumers) retrieve messages from the queue.
Ques 38. What is a JMS application?
Ans. The Java Message Service is a Java API that allows applications to create, send, receive,
and read messages. Designed by Sun and several partner companies, the JMS API defines a
common set of interfaces and associated semantics that allow programs written in the Java
programming language to communicate with other messaging implementations.
The JMS API minimizes the set of concepts a programmer must learn to use messaging products
but provides enough features to support sophisticated messaging applications. It also strives to
maximize the portability of JMS applications across JMS providers in the same messaging
domain.
The JMS API enables communication that is not only loosely coupled but also
* Asynchronous. A JMS provider can deliver messages to a client as they arrive; a client does not
have to request messages in order to receive them.
* Reliable. The JMS API can ensure that a message is delivered once and only once. Lower levels
of reliability are available for applications that can afford to miss messages or to receive
duplicate messages.
The JMS Specification was first published in August 1998. The latest version of the JMS
Specification is Version 1.1, which was released in April 2002. You can download a copy of the
Specification from the JMS Web site, http://java.sun.com/products/jms/.
Ans. The point-to-point model is used when the information is specific to a single client. For
example, a client can send a message for a print out, and the server can send information back
to this client after completion of the print job.
Ans. Messaging lets a servlet delegate processing to a batch process either on the same
machine or on a separate machine. The servlet creates a message and sends it to a queue. The
servlet immediately completes and when the batch process is ready, it processes the message.
Messaging therefore comprises three main components:
A Producer creates messages and sends them to a Queue. The Producer could be something
like a Servlet.
A Queue stores the messages from the Producer and provides them to a Consumer when ready.
The Queue is implemented by the messaging provider.
A Consumer processes messages as they become available in the Queue. The Consumer is
typically a bean implementing the MessageListener interface.
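The Producer/Queue/Consumer flow described above can be sketched with a plain in-memory queue. This is only a conceptual illustration in standard Java, not the javax.jms API; a real application would use a provider's QueueSender and a MessageListener bean.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class MessagingSketch {
    // The Queue stores messages from the Producer until a Consumer is ready.
    static final Queue<String> queue = new ArrayDeque<>();

    // Producer: creates a message, sends it to the queue, and returns
    // immediately -- it does not wait for the consumer (asynchronous).
    static void produce(String message) {
        queue.add(message);
    }

    // Consumer: processes messages as they become available in the queue.
    static List<String> consumeAll() {
        List<String> processed = new ArrayList<>();
        String m;
        while ((m = queue.poll()) != null) {
            processed.add("processed:" + m);
        }
        return processed;
    }

    public static void main(String[] args) {
        produce("print-job-1"); // e.g. a servlet delegating work
        produce("print-job-2");
        System.out.println(consumeAll()); // [processed:print-job-1, processed:print-job-2]
    }
}
```

The key point the sketch shows is the decoupling: produce() completes without knowing when, or by whom, the message will be processed.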
Ans. The JMS provider handles message security, data conversion, and client triggering. It
specifies the level of encryption, the security level of the message, and the best data type for a
non-JMS client.
Ques 48. What is the difference between Byte Message and Stream Message?
Ans. Bytes Message stores data in bytes. Thus the message is one contiguous stream of bytes.
While the Stream Message maintains a boundary between the different data types stored
because it also stores the type information along with the value of the primitive being stored.
Bytes Message allows data to be read using any type. Thus even if your payload contains a long
value, you can invoke a method to read a short and it will return something. It will not give
you semantically correct data, but the call will succeed in reading the first two bytes of the data.
This is strictly prohibited in the Stream Message. It maintains the type information of the data
being stored and enforces strict conversion rules on the data being read.
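The difference can be mimicked with java.nio.ByteBuffer. This is an analogy, not the real BytesMessage/StreamMessage classes: a raw byte view happily reads the first two bytes of a long as a short, while a typed wrapper checks the stored type tag and rejects the mismatched read.

```java
import java.nio.ByteBuffer;

public class TypedVsRawRead {
    // Raw bytes, like a BytesMessage: any read succeeds, even if it is
    // semantically wrong for the payload that was written.
    static short readShortFromLongPayload(long value) {
        ByteBuffer buf = ByteBuffer.allocate(Long.BYTES);
        buf.putLong(value);
        buf.flip();
        return buf.getShort(); // reads only the first two bytes of the long
    }

    // Typed read, like a StreamMessage: the stored type tag is checked first.
    static short checkedReadShort(Class<?> storedType, ByteBuffer buf) {
        if (storedType != Short.class) {
            throw new IllegalStateException("stored value is not a short");
        }
        return buf.getShort();
    }

    public static void main(String[] args) {
        // 1L is stored as eight bytes; the first two are zero, so the raw
        // read "succeeds" but returns a meaningless value.
        System.out.println(readShortFromLongPayload(1L)); // 0
        try {
            checkedReadShort(Long.class, ByteBuffer.allocate(Long.BYTES));
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```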
Ques 49. Are you aware of any major JMS products available in the market?
Ans. IBM's MQ Series is one of the most popular products used as Message-Oriented Middleware.
Some of the other products are SonicMQ, iBus, etc. The WebLogic application server also comes with
built in support for JMS messaging.
Ques 50. What are the different types of messages available in the JMS API?
Ans. The JMS API defines five message body types: TextMessage, BytesMessage, StreamMessage, MapMessage, and ObjectMessage.
Ques 51. How Does the JMS API Work with the J2EE Platform?
Ans. When the JMS API was introduced in 1998, its most important purpose was to allow Java
applications to access existing messaging-oriented middleware (MOM) systems, such as
MQSeries from IBM. Since that time, many vendors have adopted and implemented the JMS
API, so that a JMS product can now provide a complete messaging capability for an enterprise.
Since the 1.3 release of the J2EE platform ("the J2EE 1.3 platform"), the JMS API has been an
integral part of the platform, and application developers can use messaging with components
using J2EE APIs ("J2EE components").
The JMS API in the J2EE platform has the following features.
* Application clients, Enterprise JavaBeans (EJB) components, and Web components can send
or synchronously receive a JMS message. Application clients can in addition receive JMS
messages asynchronously. (Applets, however, are not required to support the JMS API.)
* Message-driven beans, which are a kind of enterprise bean, enable the asynchronous
consumption of messages. A JMS provider may optionally implement concurrent processing of
messages by message-driven beans.
* Message sends and receives can participate in distributed transactions.
The JMS API enhances the J2EE platform by simplifying enterprise development, allowing loosely
coupled, reliable, asynchronous interactions among J2EE components and legacy systems
capable of messaging. A developer can easily add new behavior to a J2EE application with
existing business events by adding a new message-driven bean to operate on specific business
events. The J2EE platform's EJB container architecture, moreover, enhances the JMS API by
providing support for distributed transactions and allowing for the concurrent consumption of
messages.
Another J2EE platform technology, the J2EE Connector Architecture, provides tight integration
between J2EE applications and existing Enterprise Information Systems (EIS). The JMS API, on the
other hand, allows for a very loosely coupled interaction between J2EE applications and existing
EIS systems.
At the 1.4 release of the J2EE platform, the JMS provider may be integrated with the application
server using the J2EE Connector Architecture. You access the JMS provider through a resource
adapter. For more information, see the Enterprise JavaBeans Specification, v2.1, and the J2EE
Connector Architecture Specification, v1.5.
Ans. CreateDurableSubscriber and unsubscribe calls require exclusive access to the Topics. If
there are pending JMS operations (send/publish/receive) on the same Topic before these calls
are issued, the ORA-4020 exception is raised.
There are two solutions to the problem:
1. Try to isolate the calls to createDurableSubscriber and unsubscribe in the setup or cleanup
phase, when no other JMS operations are happening on the Topic. This makes sure that the
required resources are not held by other JMS operational calls, so the error ORA-4020
will not be raised.
2. Issue a TopicSession.commit call before calling createDurableSubscriber and unsubscribe.
Ans. In addition to granting the roles, you would also need to grant execute to the user on the
following packages:
* grant execute on sys.dbms_aqin to <userid>
* grant execute on sys.dbms_aqjms to <userid>
Ques 59. What are the three components of a Message?
Ans. A JMS Message has three components: a header (routing and identification fields), optional
properties (application-, provider-, or standard-defined fields), and a body (the payload).
Ques 60. What are the different kinds of Messaging?
Ans. There are two kinds of Messaging. Synchronous messaging involves a client that waits for
the server to respond to a message. Asynchronous messaging involves a client that does not
wait for a message from the server; instead, an event is used to trigger a message from the server.
Ques 61. What is the difference between Point to Point and Publish/Subscribe?
Ans. In the point-to-point model a message is sent to a Queue and is consumed by exactly one
receiver, whereas in the publish/subscribe model a message is published to a Topic and is
delivered to every subscriber of that Topic.
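The semantic difference between the two models — a queued message is consumed by exactly one receiver, while a published message is delivered to every subscriber — can be sketched in plain Java. This is conceptual only, not the javax.jms Queue/Topic API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class DestinationSketch {
    // Point-to-point: each message is consumed by exactly one receiver
    // (here, receivers simply take turns draining the queue).
    static List<String> pointToPoint(List<String> messages, int receivers) {
        Queue<String> queue = new ArrayDeque<>(messages);
        List<String> deliveries = new ArrayList<>();
        int r = 0;
        String m;
        while ((m = queue.poll()) != null) {
            deliveries.add("receiver" + (r % receivers) + ":" + m);
            r++;
        }
        return deliveries;
    }

    // Publish/subscribe: each message is delivered to every subscriber.
    static List<String> publishSubscribe(List<String> messages, int subscribers) {
        List<String> deliveries = new ArrayList<>();
        for (String m : messages) {
            for (int s = 0; s < subscribers; s++) {
                deliveries.add("subscriber" + s + ":" + m);
            }
        }
        return deliveries;
    }

    public static void main(String[] args) {
        List<String> msgs = List.of("m1", "m2");
        // Two messages, two consumers: P2P makes 2 deliveries, pub/sub makes 4.
        System.out.println(pointToPoint(msgs, 2).size());     // 2
        System.out.println(publishSubscribe(msgs, 2).size()); // 4
    }
}
```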
Ques 62. Why doesn't the JMS API provide end-to-end synchronous message delivery and
notification of delivery?
Ans. Some messaging systems provide synchronous delivery to destinations as a mechanism for
implementing reliable applications. Some systems provide clients with various forms of delivery
notification so that the clients can detect dropped or ignored messages. This is not the model
defined by the JMS API. JMS API messaging provides guaranteed delivery via the once-and-
only-once delivery semantics of PERSISTENT messages. In addition, message consumers can
ensure reliable processing of messages by using either CLIENT_ACKNOWLEDGE mode or
transacted sessions. This achieves reliable delivery with minimum synchronization and is the
enterprise messaging model most vendors and developers prefer. The JMS API does not define a
schema of systems messages (such as delivery notifications). If an application requires
acknowledgment of message receipt, it can define an application-level acknowledgment
message.
Ques 63. What are the core JMS-related objects required for each JMS-enabled
application?
Ans. Each JMS-enabled application needs a ConnectionFactory (typically looked up via JNDI), a
Connection, a Session created from the connection, a Destination (a Queue or a Topic), and a
MessageProducer or MessageConsumer created from the session.
Ques 64. How does the Application server handle the JMS Connection?
Ans. 1. App server creates the server session and stores them in a pool.
2. Connection consumer uses the server session to put messages in the session of the JMS.
3. Server session is the one that spawns the JMS session.
4. Applications written by application programmers create the message listener.
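The pooling in steps 1 and 2 can be sketched as a borrow/return cycle over pre-created sessions. This is a toy model in plain Java, not the real application-server classes; ServerSession and its message list are illustrative stand-ins.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SessionPoolSketch {
    // Stands in for a server session holding a work list of messages.
    static class ServerSession {
        final List<String> delivered = new ArrayList<>();
    }

    // 1. The app server pre-creates server sessions and stores them in a pool.
    static Deque<ServerSession> createPool(int size) {
        Deque<ServerSession> pool = new ArrayDeque<>();
        for (int i = 0; i < size; i++) pool.add(new ServerSession());
        return pool;
    }

    // 2. The connection consumer borrows a session from the pool, puts a
    //    message into it, and returns the session to the pool when done.
    static ServerSession dispatch(Deque<ServerSession> pool, String message) {
        ServerSession s = pool.poll(); // borrow
        s.delivered.add(message);      // put the message into the session
        pool.add(s);                   // return to the pool
        return s;
    }

    public static void main(String[] args) {
        Deque<ServerSession> pool = createPool(2);
        ServerSession s = dispatch(pool, "order-created");
        System.out.println(s.delivered); // [order-created]
    }
}
```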
Ans. Log4j (Log for Java) is a logging framework provided by the Apache Foundation for Java-based
applications.
In an application, whenever some event is triggered or a database update happens, we need to
log the specific information or error so that it is useful for understanding the application's behavior.
To debug any issues in applications, we have to log the errors/exceptions in the logs. For this we
use the log4j mechanism.
Log4j logs the information and writes it to different targets. The different targets are
called appenders (console, file, etc.).
Ans. To enable logging for your application, you have to download the log4j framework
(log4j.jar) from the Apache site.
Once the log4j jars are downloaded, make sure they are on the classpath of your application.
Say you have a web application that needs log4j: in this case the log4j jars are copied to the
WEB-INF/lib folder of your web application.
Then create a new file, either logging.properties or log4j.xml, which is copied to the
WEB-INF/classes folder.
logging.properties/log4j.xml contains all the configuration related to the logging mechanism:
the logger level, and the packages for which you want to define a logger level.
Example:
logging.properties:
logger.level=INFO
logger.handlers=CONSOLE,FILE,RejRec
handler.RejRec=org.jboss.logmanager.handlers.PeriodicRotatingFileHandler
handler.RejRec.level=WARN
handler.RejRec.formatter=RejRec
handler.RejRec.properties=append,autoFlush,enabled,suffix,fileName
handler.RejRec.append=true
handler.RejRec.autoFlush=true
handler.RejRec.enabled=true
handler.RejRec.suffix=.yyyy-MM-dd
handler.RejRec.fileName=E\:\\Docs\\WithoutBook\\DID\\jboss-eap-
6.2\\standalone\\log\\RejectedRecords.log
log4j.xml (reconstructed fragment; the appender class and logger name are illustrative):
<log4j:configuration>
  <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
    <appender-ref ref="LOG"/>
  </appender>
  <logger name="com.example.meetingapp">
    <level value="warn"/>
    <appender-ref ref="MEETING-APP-LOG"/>
  </logger>
  <root>
    <priority value="debug"/>
    <appender-ref ref="ASYNC"/>
    <appender-ref ref="ERROR-LOG"/>
  </root>
</log4j:configuration>
Ans. There are several logging levels that you can configure in your application. Those are
FATAL, ERROR, WARN, INFO, DEBUG, TRACE, or ALL in Apache logging. The default logging
level is INFO.
Application logs: we can define logging at each application level; for this we have to create
log4j.xml or logging.properties in the WEB-INF/classes folder.
Ans. The different appenders that we can configure in log4j include: CONSOLE, FILE,
DATABASE, JMS, and EVENT LOGGING.
CONSOLE appender in log4j: if we use this appender in the application, log4j logs the information
to the console or command-prompt window that was started with the startup script.
FILE appender in log4j: the File appender is used to log the information into files with custom
names. When configuring this appender, we can specify the file name.
Ans. Logging helps in debugging as well. Although debuggers are available, it frankly takes
time to debug an application using a debugger; an application can be debugged more easily
with a few well-placed logging messages. So we can safely say that logging is a very good
debugging tool. If logging is done sensibly and wisely, it can provide detailed context for
application failures.
In distributed applications (e.g. web/remote applications), logging is very important. The
administrator can read logs to learn about the problems that occurred during some interval.
Java's built-in APIs provide logging options, but they are not that flexible. Another option is to use
Apache's open source logging framework, log4j.
These points matter because we ultimately want efficient applications, and log4j addresses this:
you may turn logging on or off at runtime by changing the configuration file alone. This means
no change to the Java source code (binary).
To reduce logging code further, in the Spring Framework you can use AOP to configure logging
mechanisms easily.
Ans. Before using log4j framework, one should be aware of different categories of log messages.
Following are 5 categories:
DEBUG
The DEBUG Level is used to indicate events that are useful to debug an application. Handling
method for DEBUG level is: debug().
INFO
INFO level is used to highlight the progress of the application. Handling method for INFO level is:
info().
WARN
The WARN level is used to indicate potentially harmful situations. Handling method for WARN
level is: warn().
ERROR
The ERROR level shows error events that might still allow the application to continue
running. Handling method for ERROR level is: error().
FATAL
The FATAL level is used to indicate severe events that will presumably cause the application to
abort. Handling method for FATAL level is: fatal().
If you declare log level as debug in the configuration file, then all the other log messages will
also be recorded.
If you declare log level as info in the configuration file, then info, warn, error and fatal log
messages will be recorded.
If you declare log level as warn in the configuration file, then warn, error and fatal log messages
will be recorded.
If you declare log level as error in the configuration file, then error and fatal log messages will be
recorded.
If you declare log level as fatal in the configuration file, then only fatal log messages will be
recorded.
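The threshold rule described above — a configured level records its own and all more severe messages — can be sketched with a Java enum whose declaration order encodes severity. This is illustrative only, not the real org.apache.log4j.Level class:

```java
import java.util.ArrayList;
import java.util.List;

public class LevelFilterSketch {
    // Ordered from least to most severe, as in log4j.
    enum Level { DEBUG, INFO, WARN, ERROR, FATAL }

    // A message is recorded only if it is at or above the configured level.
    static boolean isRecorded(Level configured, Level message) {
        return message.ordinal() >= configured.ordinal();
    }

    // All levels that would be recorded for a given configuration.
    static List<Level> recordedAt(Level configured) {
        List<Level> recorded = new ArrayList<>();
        for (Level l : Level.values()) {
            if (isRecorded(configured, l)) recorded.add(l);
        }
        return recorded;
    }

    public static void main(String[] args) {
        System.out.println(recordedAt(Level.WARN));  // [WARN, ERROR, FATAL]
        System.out.println(recordedAt(Level.FATAL)); // [FATAL]
    }
}
```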
Ans. There are 3 main components that are used to log messages based upon type and level.
These components also control the formatting and report place at runtime. These components
are:
- loggers
- appenders
- layouts
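How the three cooperate can be sketched in a few lines of plain Java. This is a toy model, not the log4j classes: the logger receives the event, the layout formats it, and each appender writes the formatted line to its own target.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class LoggerPipelineSketch {
    // Layout: turns a level and message into a formatted line.
    interface Layout { String format(String level, String message); }

    // Logger: formats the event via its layout, then fans it out to appenders.
    static class Logger {
        final Layout layout;
        final List<Consumer<String>> appenders = new ArrayList<>();
        Logger(Layout layout) { this.layout = layout; }
        void addAppender(Consumer<String> a) { appenders.add(a); }
        void log(String level, String message) {
            String line = layout.format(level, message);
            for (Consumer<String> a : appenders) a.accept(line);
        }
    }

    public static void main(String[] args) {
        List<String> fileTarget = new ArrayList<>(); // stands in for a file appender
        Logger log = new Logger((lvl, msg) -> "[" + lvl + "] " + msg);
        log.addAppender(System.out::println); // console appender
        log.addAppender(fileTarget::add);     // "file" appender
        log.log("INFO", "application started");
        System.out.println(fileTarget); // [[INFO] application started]
    }
}
```

The same event reaches both targets in the same format, which is exactly the division of labor log4j's configuration file expresses declaratively.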
Ques 2. Which JBoss configuration contains everything needed for clustering?
Ans. Basically, starting JBoss with the 'all' configuration contains everything needed for
clustering:
It has all the libraries for clustering:
JGroups.jar, jboss-cache.jar
HA-JNDI
Farming
HA-JMS
Ans. The JGroups framework provides services to enable peer-to-peer communications between
nodes in a cluster. It is built on top of a stack of network communication protocols that provide
transport, discovery, reliability and failure detection, and cluster membership management
services.
Ques 3. Is it possible to put a JBoss server instance into multiple clusters at the same time?
Ans. While it is technically possible to put a JBoss server instance into multiple clusters at the
same time, this practice is generally not recommended, as it increases management complexity.
Ans. JBossCache enables easy distribution of datasets across your computing environments. It is
based on JGroups and enables clustering and high availability of that data. You may choose to
distribute the data with JBoss Messaging to move it where it is needed for computation or event-
based programming.
Ans. Built on the standards JavaServer Faces and EJB 3.0, JBoss Seam unifies component and
programming models and delivers a consistent and powerful framework for rapid creation of
web applications with Java EE 5.0. Seam simplifies web application development and enables
new functionality that was difficult to implement by hand before, such as stateful conversations,
multi-window operation, and handling concurrent fine-grained AJAX requests. Seam also unifies
and integrates popular open source technologies like Facelets, Hibernate, iText, and Lucene.
Ques 6. Does Seam run on other application servers besides JBoss?
Ans. Seam runs beautifully on other application servers - just like everything else the Hibernate
team does, this is not a JBoss-only thing.
Ans. JBoss jBPM is a platform for process languages. At the base there is a Java library to define
and execute graphs. The actual process constructs, e.g. send email, user task, and update
database, are defined on top of this. Every process language is made up of a set of such process
constructs, and that is what is pluggable in this base library. On top of the JBoss jBPM base
library, several process languages are implemented as sets of process constructs: jPDL,
BPEL and SEAM pageflow:
jPDL is a process language with a clean interface to Java and very sophisticated task
management capabilities. There is no standard for Java process languages, so it is proprietary.
BPEL is a service orchestration language. As said before, in BPEL, you can write new services as a
function of other services. This is typically a component of an Enterprise Service Bus (ESB).
SEAM pageflow is a language that allows you to graphically define the navigation between
pages in a SEAM web application.
Ans. JBoss is a popular open source application server based on Java EE technology. Being
Java EE based, JBoss supports cross-platform Java applications. It is embedded with the
Apache Tomcat web server and runs under any JVM of version 1.3 or later. JBoss supports JNDI,
Servlet/JSP (Tomcat or Jetty), EJB, JTS/JTA, JCA, JMS, Clustering (JavaGroups), Web Services
(Axis), and IIOP integration (JacORB).
Ques 10. What version of JBoss AS do I need to run Seam?
Ans. For Seam 1.3: Seam was developed against JBoss 4.2. Seam can still be run against JBoss
4.0; the Seam documentation contains instructions for configuring JBoss 4.0.
For Seam 1.2: Since Seam requires the latest edition of EJB3, you need to install JBoss AS from the
latest JEMS installer. Make sure that you select the ejb3 or ejb3+clustering profile to include
EJB3 support. Also, the jboss-seam.jar library file from the Seam distribution must be included in
each Seam application you deploy. Refer to examples in Seam distribution (inside the examples
directory) to see how to build and package Seam applications.
Ans. Yes, you can run Seam applications in plain Tomcat 5.5+ or in the Sun GlassFish application
server. To run Seam application in Tomcat, you need a number of additional library files and a
few configuration files to bootstrap the JBoss EJB3 inside Tomcat. Please refer to the
deploy.tomcat ANT build target for the Seam booking example (in the examples/booking
directory of the Seam distribution) for more on how to build a Tomcat WAR for Seam
applications. Refer to this blog post on how to run Seam in Sun's GlassFish application server.
Ans. Yes, as of Seam 1.1, you can use Seam in any J2EE application server, with one caveat: you
will not be able to use EJB 3.0 session beans. However, you can use either Hibernate or JPA for
persistence, and you can use Seam JavaBean components instead of session beans.
Ques 13. Can I run Seam with JDK 1.4 and earlier?
Ans. No, Seam only works on JDK 5.0 and above. It uses annotations and other JDK 5.0 features.
Ans. Here is a nutshell summary from the example given in the .org quick start:
so the start script that you would run on your second machine for the second node would look
like:
$ ./run.sh -c all-node2 -g DocsPartition -u 239.255.100.100 -b 192.168.0.102 -
Djboss.messaging.ServerPeerID=2
You can also use nohup to send the process to the background.
Now that both nodes are running, you have to enable and configure sticky sessions on
the web server and on each server.
Ques 2. What are the components of logical database structure of Oracle database?
Ans. A database is divided into logical storage units called tablespaces. A tablespace is used to
group related logical structures together.
Ans. Every Oracle database contains a tablespace named SYSTEM, which is automatically
created when the database is created. The SYSTEM tablespace always contains the data
dictionary tables for the entire database.
Ans. Schema objects are the logical structures that directly refer to the database's data.
Schema objects include tables, views, sequences, synonyms, indexes, clusters, database
triggers, procedures, functions, packages, and database links.
Ans. A table is the basic unit of data storage in an Oracle database. The tables of a database
hold all of the user accessible data. Table data is stored in rows and columns.
Ans. A view is a virtual table. Every view has a query attached to it. (The query is a SELECT
statement that identifies the columns and rows of the table(s) the view uses.)
Ans. Clusters are groups of one or more tables physically stored together because they share
common columns and are often used together.
Ques 26. What is an Integrity Constraint?
Ans. An integrity constraint is a declarative way to define a business rule for a column of a table.
Ans. An index is an optional structure associated with a table that gives direct access to rows
and can be created to increase the performance of data retrieval. An index can be created on
one or more columns of a table.
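Conceptually, an index trades extra storage for direct row access, like keeping a map keyed on the indexed column instead of scanning every row. The sketch below is a plain-Java analogy, not how Oracle's B-tree indexes are actually implemented:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IndexSketch {
    // A row of a hypothetical two-column table.
    record Row(int id, String name) {}

    // Full table scan: examine every row until a match is found.
    static Row scanById(List<Row> table, int id) {
        for (Row r : table) {
            if (r.id() == id) return r;
        }
        return null;
    }

    // "Index" on the id column: built once, then gives direct access to rows.
    static Map<Integer, Row> buildIndex(List<Row> table) {
        Map<Integer, Row> index = new HashMap<>();
        for (Row r : table) index.put(r.id(), r);
        return index;
    }

    public static void main(String[] args) {
        List<Row> table = List.of(new Row(1, "SYSTEM"), new Row(2, "USERS"));
        Map<Integer, Row> idx = buildIndex(table);
        System.out.println(scanById(table, 2).name()); // USERS (linear scan)
        System.out.println(idx.get(2).name());         // USERS (direct lookup)
    }
}
```

Both lookups return the same row; the index simply avoids visiting every row, which is why indexes speed up retrieval at the cost of extra storage and maintenance.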
Ans. SDLC serves as a guide to the project and provides a flexible and consistent medium to
accommodate changes and deliver the project to meet the client's objectives. SDLC phases
define key schedule and delivery points, which ensure timely and correct delivery to the client
within budget and other constraints and project requirements. SDLC incorporates project control
and management activities, which must be introduced within each phase of the SDLC.
Ques 4. What is SDLC model? What are the most well known SDLC models?
Ans.
An SDLC model defines implementation of an approach to the project. It defines the various
processes, and phases that would be carried out throughout the project to produce the desired
output. There are a variety of SDLC models that exist catering to different needs and
characteristics of a project. Some are of iterative nature (Prototyping), whereas some are
sequential (waterfall). Some of the well known SDLC models are:
Waterfall Model
Iterative Model
Spiral Model
V-Model
RAD Model
Agile Model
Ans.
Waterfall is a sequential, non-iterative SDLC model in which the phases flow downwards one after
another. A phase does not start unless the previous phase has been fully completed. The waterfall
model consists of the following phases:
Requirements gathering
Design
Implementation
Testing
Maintenance
Ans. Requirements gathering: All the requirements are gathered and analysis is performed for the
complete system.
Design: Various design models are created for the complete system after the requirements gathering phase has
been completed.
Implementation: The complete system is implemented once the design for the system has been frozen.
Testing: The complete system is tested after all the construction and integration has been completed.
Maintenance: Post-implementation support is carried out after the system goes live.
Ans.
The V-shaped SDLC model is an extension of the waterfall model. The typical waterfall moves linearly downwards,
whereas, in V-shaped model phases are turned upwards after coding phase to form the V shape. It demonstrates
relationship between each phase of SDLC and its respective testing phase. Unlike waterfall model, the V-Shape
includes early test planning.
Ans.
Phases in V-Shaped model:
Verification phases are on the left side of the V-shape. It consists of:
Requirements analysis: Requirements are gathered and analysis is performed to understand the
problem and propose a solution.
System Design: Engineers analyze the requirements gathered and propose ways the system can
be created or built from a feasibility point of view.
Architecture design: Architecture of the system is designed consisting of various modules,
depicting their relationships and communication between them.
Module design: This is a low level design where modules are designed individually and in a
detailed manner.
Coding: This is at the bottom of the V-Shape model. Module design is converted into code by
developers.
Validation phases are on the right side of the V-shape. It consists of:
Unit testing: Developers test their independent modules by analyzing the code.
Integration testing: Independent modules are tested together to validate interface and expose
errors in them.
System testing: The system is tested against the system specifications.
User Acceptance testing: Testing is performed by end users to validate that the requirements
mentioned in requirements phase have been met by the system or not before accepting it for
production.
Ans. The V-shaped model should be used for small to medium-sized projects where requirements
are clearly defined and fixed. The model accommodates more test planning than waterfall
but makes accommodating changes harder than other models. The V-shaped model should
be chosen when ample technical resources with the needed expertise are available.
Since no prototypes are produced, there is a very high risk involved in meeting customer
expectations; therefore, customer confidence should be very high before choosing the V-shaped
approach.
Ans. The Prototype SDLC model is based upon creating a software prototype of the complete
system and then refining and reviewing it continuously until a complete, acceptable system is built.
Ans.
a) Focusing on the prototype can mislead developers from understanding the actual desired
system.
b) End users get confused, believing the prototype to be the complete system
c) Developers might misunderstand end users objectives.
d) Developer might get too involved in prototype and deviate from the actual system that the
prototype must be converted into.
e) Expensive, as prototypes need a lot of effort and time. A great deal of work may be done to
achieve relatively little of what is actually needed.
Ans. The Prototype model should be used when the desired system needs to have a lot of
interaction with the end users. Online systems and web interfaces typically have a very high
amount of interaction with end users and are best suited for the Prototype model. It might take a
while to build a system that allows ease of use and needs minimal training for the end user.
Prototyping ensures that the end users constantly work with the system and provide a feedback
which is incorporated in the prototype to result in a useable system. They are excellent for
designing good human computer interface systems.
Ques 20. Describe rapid application development (RAD) software development life cycle
model.
Ans. RAD involves iterative development along with the creation of prototypes. It makes iterative
use of techniques and prototypes to define users' requirements and system design clearly.
Structured techniques are used to create initial design models based on user input and
prototypes are built on top of that. The end users and analysts use the prototypes to validate
and enhance the requirements and design models. The process lasts till a set of final technical
requirements and design models have been created.
Ques 21. Briefly describe the phases in the rapid application development (RAD) model.
Ans.
Phases in RAD:
Business modeling: The information flow is identified between various business functions.
Data modeling: Information gathered from business modeling is used to define the data objects that are needed
for the business.
Process modeling: Data objects defined in data modeling are transformed to achieve the business information
flow towards a specific business objective. Descriptions are identified and created for the CRUD operations on
the data objects.
Application generation: Automated tools are used to convert the process models into code and the actual system.
Testing and turnover: New components and all the interfaces are tested.
Ques 22. Explain the strengths of the rapid application development (RAD) model.
Ans.
Strengths of RAD:
a) Reduced cycle time and improved productivity through reuse of components.
b) Customer involvement throughout the cycle reduces the risk of missing requirements.
c) Use of modeling and automated code-generation tools speeds up construction.
d) Feedback from early prototypes is incorporated quickly.
Ans.
Weaknesses of RAD:
a) Depends on strong team and individual performances for identifying business requirements.
b) Only systems that can be modularized can be built using RAD.
c) Requires highly skilled developers/designers.
d) High dependency on modeling skills.
e) Inapplicable to cheaper projects, as the cost of modeling and automated code generation is
too high for low-budget projects to justify.
Ques 24. Explain when to use the rapid application development (RAD) model.
Ans. RAD should be used when there is a need to create a system that can be modularized within
2-3 months of time. It should be used if there is high availability of designers for modeling and the
budget is high enough to afford their cost along with the cost of automated code-generation
tools. The RAD SDLC model should be chosen only if resources with high business knowledge are
available and there is a need to produce the system in a short span of time (2-3 months).
Ans. Incremental SDLC approach suggests construction of a partial system rather than the
complete system and then builds more functionality into it. Requirements and features are
prioritized and categorized and then implemented in phases, each phase based on the
waterfall model. The process continues till the complete system is achieved.
Ans. The phases of the incremental model are the same as waterfall, i.e. requirements, design,
implementation, testing, maintenance. However, instead of following the waterfall once
linearly, the incremental model takes a different approach: the phases are repeated
incrementally, and business value is delivered incrementally as well.
For every increment a waterfall model is followed; the waterfall is then put in a cycle of
increments along with verification of requirements and design.
Ans. The spiral SDLC model combines components of both design and prototyping in phases. It is
a hybrid of the waterfall and prototyping models. The spiral SDLC model is best used for large and
expensive projects.
Ans.
Phases in spiral model:
a) Planning: objectives, alternatives, and constraints are determined.
b) Risk analysis: alternatives are evaluated and risks are identified and resolved.
c) Engineering: the next version of the product is developed and tested.
d) Evaluation: the customer evaluates the output and planning for the next spiral begins.
Ans. There is no specific SDLC model that can be used for all types of projects and situations. If
none of the popular SDLC models suits a specific project, pick the closest matching SDLC
model and modify it as per the needs. Identify how important risk assessment is and use the
spiral's risk assessment methodology if it is a risk-critical project. The project should be delivered
in small chunks, ideally merging the incremental model with the V-shaped model. One must
spend ample time choosing the right model, or customizing one to suit the project, for its
successful and efficient completion.
Ques 36. Describe the importance of selecting team members with a mix of personality
types for software development.
Ans.
Choosing or building the right team is vital to the success of any project. A project needs a variety
of skills and qualities which are not all present in any single individual; as a workaround, a team
should be built of people with a variety of skill sets to fulfill the project's needs. The main advantage
of choosing team members with a mix of personality types is that it provides a wider range of
views towards a project or any specific action item in the project, e.g. : requirements, design,
development, testing or even implementation. Different views allow for a broader angle to a
problem and solution minimizing the risk of missing requirements or misunderstanding them.
Some of the personality traits that are essential to any project are:
a) An aggressive go-getter versus a calm, patient, more laid-back personality
b) A risk taker versus a cautious personality
c) A strategic versus an analytical personality
d) A lateral thinker
Different situations in a project are handled better by different personality types and hence a
perfect blend/mix of personality types is essential for the project to complete successfully.
Ques 37. Describe the phases of team development in SDLC.
Ques 38. What is the difference between an Iterative model and the Waterfall model?
Ans. The Waterfall model is a flow-based model in which we pass through every phase once and
cannot go back to that phase again. Its most prominent drawback is that if there is any change
in requirements, we cannot go back and change the requirements section. The Iterative model is
somewhat similar to the waterfall model, but in it we can always come back to previous phases
and make changes accordingly.
Ans. SDLC is the software development life cycle model, which is utilized for project management
and involves processes from the feasibility analysis to the maintenance of the completed
application. STLC is the software testing life cycle. SDLC and STLC work closely together and are
almost inseparable in some activities; however, the stages are very different under SDLC and
STLC.
Ans. A functional requirement is a document which describes what a certain system has to do to
achieve a certain specific objective. This task is carried out during the preliminary stage of the SDLC.
Ans. Without non-functional requirements, software will never function properly or will have vital
information missing in its output. Response time, security, reliability, accuracy, capacity, and
availability are examples of non-functional requirements for a software development process.
Non-functional requirements decide how the program or software will perform.
Ques 42. What is the difference between Incremental model and Spiral model?
Ans. There is not much difference between these two SDLC models. The spiral model includes the
iterative nature of the prototyping model and the linear nature of the waterfall model. This
approach is ideal for developing software that is released in various versions.
Ques 43. Give some practical real life examples of Spiral Model.
Ans. The most popular real-life examples of the SDLC spiral model are the Microsoft Windows
operating system, Visual Studio, Adobe Photoshop, the WordPress CMS, and many more.
Ans. The Agile methodology is far more advanced and flexible than the simple Waterfall model.
Agile's ability to reshape the entire development structure to suit the most effective outcome is
what makes it the number one choice of developers today.
Ans. Of course. There is no hard and fast requirement for a developer to implement any SDLC
model when developing a software project. The ability to simplify a project into modules and
ascertain correct progression to completion is the reason SDLC models and methodologies
were designed in the first place. You can certainly work without them, but the challenges will be
greater and there won't be any specific process to organize your work as a whole.