Cosmos DB Interview Questions

Azure Cosmos DB is a globally distributed, multi-model database service that offers querying over schema-free data with high availability and low latency. It provides automatic scaling, rich querying, and configurable throughput. Azure Cosmos DB supports document, key-value, wide-column, and graph databases. It uses request units to provide predictable performance and can scale throughput independently for each database. Developers can access Azure Cosmos DB using SQL API, MongoDB API, Graph API, Table API or REST endpoints.


Cosmos DB

https://www.wisdomjobs.com/e-university/azure-cosmos-db-interview-questions.html

Question 1. What Is Azure Cosmos Db?

Answer:
Azure Cosmos DB is a globally replicated, multi-model database service that offers rich querying over schema-free data, helps deliver
configurable and reliable performance, and enables rapid development. It's all achieved through a managed platform that's backed by the power and
reach of Microsoft Azure.
Azure Cosmos DB is the right solution for web, mobile, gaming, and IoT applications when predictable throughput, high availability, low latency, and
a schema-free data model are key requirements. It delivers schema flexibility and rich indexing, and it includes multi-document transactional
support with integrated JavaScript.
Question 2. What Happened To Document Db?
Answer:
The DocumentDB API is one of the supported APIs and data models for Azure Cosmos DB. In addition, Azure Cosmos DB supports you with Graph
API (Preview), Table API and MongoDB API. 
Question 3. How Do I Get To My Documentdb Account In The Azure Portal?
Answer:
In the Azure portal, click the Azure Cosmos DB icon in the left pane. If you had a DocumentDB account before, you now have an Azure Cosmos DB
account, with no change to your billing.
Question 4. What Are The Typical Use Cases For Azure Cosmos Db?
Answer:
Azure Cosmos DB is a good choice for new web, mobile, gaming, and IoT applications where automatic scale, predictable performance, fast order-of-millisecond response times, and the ability to query over schema-free data are important. Azure Cosmos DB lends itself to rapid development and
supporting the continuous iteration of application data models. Applications that manage user-generated content and data are common use cases for
Azure Cosmos DB.
Question 5. How Does Azure Cosmos Db Offer Predictable Performance?
Answer:
A request unit (RU) is the measure of throughput in Azure Cosmos DB. A 1-RU throughput corresponds to the throughput of the GET of a 1-KB
document. Every operation in Azure Cosmos DB, including reads, writes, SQL queries, and stored procedure executions, has a deterministic RU value
that's based on the throughput required to complete the operation. Instead of thinking about CPU, IO, and memory and how they each affect your
application throughput, you can think in terms of a single RU measure.
You can reserve each Azure Cosmos DB container with provisioned throughput in terms of RUs of throughput per second. For applications of any
scale, you can benchmark individual requests to measure their RU values, and provision a container to handle the total of request units across all
requests. You can also scale up or scale down your container's throughput as the needs of your application evolve.
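As a rough illustration (a sketch using the azure-cosmos Python SDK; the endpoint, key, database, and container names below are placeholders, not values from this document), you can inspect the RU charge reported for an operation and adjust provisioned throughput:

from azure.cosmos import CosmosClient

# Placeholder account endpoint, key, and resource names.
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("appdb").get_container_client("items")

# A point read of a 1-KB document costs roughly 1 RU; the service reports the
# exact charge of every operation in the x-ms-request-charge response header.
item = container.read_item(item="item1", partition_key="user1")
charge = container.client_connection.last_response_headers["x-ms-request-charge"]
print(f"Point read consumed {charge} RUs")

# Scale the container's provisioned RU/s up or down as the workload evolves.
current = container.get_throughput()
container.replace_throughput(current.offer_throughput + 400)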
Question 6. How Does Azure Cosmos Db Support Various Data Models Such As Key/value, Columnar, Document and Graph?
Answer:
Key/value (table), columnar, document and graph data models are all natively supported because of the ARS (atoms, records and sequences) design
that Azure Cosmos DB is built on. Atoms, records, and sequences can be easily mapped and projected to various data models. The APIs for a subset
of models are available right now (DocumentDB, MongoDB, Table, and Graph APIs) and others specific to additional data models will be available in
the future.
Azure Cosmos DB has a schema agnostic indexing engine capable of automatically indexing all the data it ingests without requiring any schema or
secondary indexes from the developer. The engine relies on a set of logical index layouts (inverted, columnar, tree) which decouple the storage
layout from the index and query processing subsystems. Cosmos DB also has the ability to support a set of wire protocols and APIs in an extensible
manner and translate them efficiently to the core data model (1) and the logical index layouts (2) making it uniquely capable of supporting multiple
data models natively.
Question 7. Is Azure Cosmos Db Hipaa Compliant?
Answer:
Yes, Azure Cosmos DB is HIPAA-compliant. HIPAA establishes requirements for the use, disclosure, and safeguarding of individually identifiable
health information.
Question 8. What Are The Storage Limits Of Azure Cosmos Db?
Answer:
There is no limit to the total amount of data that a container can store in Azure Cosmos DB.
Question 9. What Are The Throughput Limits Of Azure Cosmos Db?
Answer:
There is no limit to the total amount of throughput that a container can support in Azure Cosmos DB. The key idea is to distribute your workload
roughly evenly among a sufficiently large number of partition keys.
Question 10. How Much Does Azure Cosmos Db Cost?
Answer:
For details, refer to the Azure Cosmos DB pricing details page. Azure Cosmos DB usage charges are determined by the number of provisioned
containers, the number of hours the containers were online, and the provisioned throughput for each container. The term containers here refers to
the DocumentDB API collection, Graph API graph, MongoDB API collection, and Table API tables.
Question 11. Is A Free Account Available?
Answer:
Yes, you can sign up for a time-limited account at no charge, with no commitment. To sign up, visit Try Azure Cosmos DB for free or read more in
the Try Azure Cosmos DB FAQ.
If you are new to Azure, you can sign up for an Azure free account, which gives you 30 days and a credit to try all the Azure services. If you
have a Visual Studio subscription, you are also eligible for free Azure credits to use on any Azure service. 
You can also use the Azure Cosmos DB Emulator to develop and test your application locally for free, without creating an Azure subscription. When
you're satisfied with how your application is working in the Azure Cosmos DB Emulator, you can switch to using an Azure Cosmos DB account in the
cloud.
Question 12. How Do I Sign Up For Azure Cosmos Db?
Answer:
Azure Cosmos DB is available in the Azure portal. First, sign up for an Azure subscription. After you've signed up, you can add a DocumentDB API,
Graph API (Preview), Table API, or MongoDB API account to your Azure subscription.
Question 13. What Is A Master Key?
Answer:
A master key is a security token to access all resources for an account. Individuals with the key have read and write access to all resources in the
database account. Use caution when you distribute master keys. The primary master key and secondary master key are available on the Keys blade
of the Azure portal.
Question 14. What Are The Regions That Preferred Locations Can Be Set To?
Answer:
The Preferred Locations value can be set to any of the Azure regions in which Cosmos DB is available. For a list of available regions, see Azure
regions.
Question 15. Is There Anything I Should Be Aware Of When Distributing Data Across The World Via The Azure Data Centers?
Answer:
Azure Cosmos DB is present across all Azure regions, as specified on the Azure regions page. Because it is the core service, every new datacenter
has an Azure Cosmos DB presence.
When you set a region, remember that Azure Cosmos DB respects sovereign and government clouds. That is, if you create an account in a sovereign
region, you cannot replicate out of that sovereign region. Similarly, you cannot enable replication into other sovereign locations from an outside
account.
Question 16. How Do I Start Developing Against The Documentdb Api?
Answer:
The Microsoft DocumentDB API is available in the Azure portal. First you must sign up for an Azure subscription. Once you sign up for an Azure subscription, you can add a DocumentDB API container to your Azure subscription. For instructions on adding an Azure Cosmos DB account, see
Create an Azure Cosmos DB database account. If you had a DocumentDB account in the past, you now have an Azure Cosmos DB account.
SDKs are available for .NET, Python, Node.js, JavaScript, and Java. Developers can also use the RESTful HTTP APIs to interact with Azure Cosmos
DB resources from various platforms and languages.
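For example, a minimal sketch with the Python SDK (the azure-cosmos package; the endpoint, key, and database/container names are placeholders) might look like this:

from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint, key, and names for illustration.
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<primary-key>")
database = client.create_database_if_not_exists(id="appdb")
container = database.create_container_if_not_exists(
    id="items",
    partition_key=PartitionKey(path="/userId"),
    offer_throughput=400,
)

# Store an arbitrary JSON document; no schema definition is required.
container.upsert_item({"id": "1", "userId": "user1", "title": "hello cosmos"})

# Query the data immediately with the SQL query interface.
for doc in container.query_items(
    query="SELECT * FROM c WHERE c.userId = @userId",
    parameters=[{"name": "@userId", "value": "user1"}],
    enable_cross_partition_query=True,
):
    print(doc["id"], doc["title"])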
Question 17. Can I Access Some Ready-made Samples To Get A Head Start?
Answer:
Samples for the DocumentDB API .NET, Java, Node.js, and Python SDKs are available on GitHub.
Question 18. Does The Documentdb Api Database Support Schema-free Data?
Answer:
Yes, the DocumentDB API allows applications to store arbitrary JSON documents without schema definitions or hints. Data is immediately available
for query through the Azure Cosmos DB SQL query interface.
Question 19. Does The Documentdb Api Support Acid Transactions?
Answer:
Yes, the DocumentDB API supports cross-document transactions expressed as JavaScript-stored procedures and triggers. Transactions are scoped to
a single partition within each collection and executed with ACID semantics as "all or nothing," isolated from other concurrently executing code and
user requests. If exceptions are thrown through the server-side execution of JavaScript application code, the entire transaction is rolled back.
Question 20. What Is A Collection?
Answer:
A collection is a group of documents and their associated JavaScript application logic. A collection is a billable entity, where the cost is determined by
the throughput and used storage. Collections can span one or more partitions or servers and can scale to handle practically unlimited volumes of
storage or throughput.
Collections are also the billing entities for Azure Cosmos DB. Each collection is billed hourly, based on the provisioned throughput and used storage
space.
Question 21. How Do I Create A Database?
Answer:
You can create databases by using the Azure portal, as described in Add a collection, one of the Azure Cosmos DB SDKs, or the REST APIs.
Question 22. How Do I Set Up Users And Permissions?
Answer:
You can create users and permissions by using one of the Cosmos DB API SDKs or the REST APIs.
Question 23. Does The Documentdb Api Support Sql?
Answer:
The SQL query language is an enhanced subset of the query functionality that's supported by SQL. The Azure Cosmos DB SQL query language
provides rich hierarchical and relational operators and extensibility via JavaScript-based, user-defined functions (UDFs). JSON grammar allows for
modeling JSON documents as trees with labeled nodes, which are used by both the Azure Cosmos DB automatic indexing techniques and the SQL
query dialect of Azure Cosmos DB.
Question 24. Does The Documentdb Api Support Sql Aggregation Functions?
Answer:
The DocumentDB API supports low-latency aggregation at any scale via aggregate functions COUNT, MIN, MAX, AVG, and SUM via the SQL
grammar.
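As an illustrative sketch (Python SDK, placeholder account and container names), an aggregate is expressed directly in the query text:

from azure.cosmos import CosmosClient

# Placeholder account, database, and container names.
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("appdb").get_container_client("items")

# COUNT, MIN, MAX, AVG, and SUM are expressed directly in the SQL grammar.
result = list(container.query_items(
    query="SELECT VALUE AVG(c.price) FROM c WHERE c.category = @cat",
    parameters=[{"name": "@cat", "value": "books"}],
    enable_cross_partition_query=True,
))
print("Average price:", result[0])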
Question 25. How Does The Documentdb Api Provide Concurrency?
Answer:
The DocumentDB API supports optimistic concurrency control (OCC) through HTTP entity tags, or ETags. Every DocumentDB API resource has an
ETag, and the ETag is set on the server every time a document is updated. The ETag header and the current value are included in all response
messages. ETags can be used with the If-Match header to allow the server to decide whether a resource should be updated. The If-Match value is
the ETag value to be checked against. If the ETag value matches the server ETag value, the resource is updated. If the ETag is no longer current,
the server rejects the operation with an "HTTP 412 Precondition failure" response code. The client then re-fetches the resource to acquire the current
ETag value for the resource. In addition, ETags can be used with the If-None-Match header to determine whether a re-fetch of a resource is needed.
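A hedged example of this pattern with the Python SDK (the etag and match_condition keyword arguments are how that SDK surfaces If-Match; the account and item names are placeholders):

from azure.core import MatchConditions
from azure.cosmos import CosmosClient, exceptions

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("appdb").get_container_client("items")

doc = container.read_item(item="1", partition_key="user1")
doc["title"] = "updated title"
try:
    # Send If-Match with the document's current ETag; the replace succeeds
    # only if nobody else has updated the document in the meantime.
    container.replace_item(
        item=doc["id"],
        body=doc,
        etag=doc["_etag"],
        match_condition=MatchConditions.IfNotModified,
    )
except exceptions.CosmosHttpResponseError as e:
    if e.status_code == 412:  # precondition failed: re-read and retry
        doc = container.read_item(item="1", partition_key="user1")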
Question 26. How Do I Perform Transactions In The Documentdb Api?
Answer:
The DocumentDB API supports language-integrated transactions via JavaScript-stored procedures and triggers. All database operations inside scripts
are executed under snapshot isolation. If it is a single-partition collection, the execution is scoped to the collection. If the collection is partitioned,
the execution is scoped to documents with the same partition-key value within the collection. A snapshot of the document versions (ETags) is taken
at the start of the transaction and committed only if the script succeeds. If the JavaScript throws an error, the transaction is rolled back.
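A minimal sketch, assuming the Python SDK's scripts helper and a container partitioned on /userId (all names are placeholders); the stored procedure body itself is the JavaScript that runs transactionally on the server:

from azure.cosmos import CosmosClient

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("appdb").get_container_client("items")

js_source = """
function createTwoDocs(doc1, doc2) {
    var ctx = getContext();
    var coll = ctx.getCollection();
    // Both creates run in one transaction: if either fails, both roll back.
    coll.createDocument(coll.getSelfLink(), doc1, function (err) {
        if (err) throw err;
        coll.createDocument(coll.getSelfLink(), doc2, function (err2) {
            if (err2) throw err2;
            ctx.getResponse().setBody("both documents committed");
        });
    });
}
"""

# Register the stored procedure once (raises a conflict if it already exists).
container.scripts.create_stored_procedure(body={"id": "createTwoDocs", "body": js_source})

# All writes inside the script are scoped to one partition key value and commit atomically.
result = container.scripts.execute_stored_procedure(
    sproc="createTwoDocs",
    partition_key="user1",
    params=[{"id": "a", "userId": "user1"}, {"id": "b", "userId": "user1"}],
)
print(result)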
Question 27. How Can I Bulk-insert Documents Into Cosmos Db?
Answer:
You can bulk-insert documents into Azure Cosmos DB in either of two ways:
o The data migration tool, as described in Database migration tool for Azure Cosmos DB.
o Stored procedures, as described in Server-side JavaScript programming for Azure Cosmos DB.
Question 28. Does The Documentdb Api Support Resource Link Caching?
Answer:
Yes, because Azure Cosmos DB is a RESTful service, resource links are immutable and can be cached. DocumentDB API clients can specify an "If-None-Match" header for reads against any resource, such as a document or collection, and then update their local copies after the server version has changed.
Question 29. Is A Local Instance Of Documentdb Api Available?
Answer:
Yes. The Azure Cosmos DB Emulator provides a high-fidelity emulation of the Cosmos DB service. It supports functionality that's identical to Azure
Cosmos DB, including support for creating and querying JSON documents, provisioning and scaling collections, and executing stored procedures and
triggers. You can develop and test applications by using the Azure Cosmos DB Emulator, and deploy them to Azure at a global scale by making a
single configuration change to the connection endpoint for Azure Cosmos DB.
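For instance (a sketch assuming the Python SDK; the endpoint and key below are the emulator's documented defaults, not secrets, and connection_verify=False only works around the emulator's self-signed certificate):

from azure.cosmos import CosmosClient, PartitionKey

# The emulator listens on a local HTTPS endpoint and ships with a fixed, documented key.
EMULATOR_ENDPOINT = "https://localhost:8081/"
EMULATOR_KEY = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw=="

# Drop connection_verify=False and swap the endpoint/key when you move to the cloud.
client = CosmosClient(EMULATOR_ENDPOINT, credential=EMULATOR_KEY, connection_verify=False)
db = client.create_database_if_not_exists(id="appdb")
container = db.create_container_if_not_exists(id="items", partition_key=PartitionKey(path="/userId"))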
Question 30. What Is The Azure Cosmos Db Api For Mongodb?
Answer:
The Azure Cosmos DB API for MongoDB is a compatibility layer that allows applications to easily and transparently communicate with the native
Azure Cosmos DB database engine by using existing, community-supported Apache MongoDB APIs and drivers. Developers can now use existing
MongoDB tool chains and skills to build applications that take advantage of Azure Cosmos DB. Developers benefit from the unique capabilities of
Azure Cosmos DB, which include auto-indexing, backup maintenance, financially backed service level agreements (SLAs), and so on.
Question 31. How Do I Connect To My Api For Mongodb Database?
Answer:
The quickest way to connect to the Azure Cosmos DB API for MongoDB is to head over to the Azure portal. Go to your account and then, on the left
navigation menu, click Quick Start. Quick Start is the best way to get code snippets to connect to your database.
Azure Cosmos DB enforces strict security requirements and standards. Azure Cosmos DB accounts require authentication and secure communication
via SSL, so be sure to use TLSv1.2.
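A small sketch with PyMongo, using a placeholder connection string copied from the Quick Start blade:

from pymongo import MongoClient

# Placeholder connection string; copy the real one from the Quick Start blade.
CONNECTION_STRING = "<your-cosmos-db-mongodb-connection-string>"

# The API for MongoDB requires TLS; existing drivers such as PyMongo work as-is.
client = MongoClient(CONNECTION_STRING, tls=True)
db = client["appdb"]
db.items.insert_one({"userId": "user1", "title": "hello cosmos"})
print(db.items.find_one({"userId": "user1"}))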
Question 32. How Can I Use The Table Api Offering?
Answer:
The Azure Cosmos DB Table API is available in the Azure portal. First you must sign up for an Azure subscription. After you've signed up, you can
add an Azure Cosmos DB Table API account to your Azure subscription, and then add tables to your account.
You can find the supported languages and associated quick-starts in the Introduction to Azure Cosmos DB Table API.
Question 33. Do I Need A New Sdk To Use The Table Api?
Answer:
No, existing storage SDKs should still work. However, we recommend that you always use the latest SDKs for the best support and, in many cases, superior performance. See the list of available languages in the Introduction to Azure Cosmos DB Table API.
Question 34. How Do I Provide Feedback About The Sdk Or Bugs?
Answer:
You can share your feedback in any of the following ways:
o User voice
o MSDN forum
o Stack overflow
Question 35. How Do I Override The Config Settings For The Request Options In The .net Sdk For The Table Api?
Answer:
For information about config settings, see Azure Cosmos DB capabilities. Some settings are handled on the CreateCloudTableClient method and others via the app.config in the appSettings section of the client application.
Question 36. Are There Any Changes For Customers Who Are Using The Existing Azure Table Storage Sdks?
Answer:
None. There are no changes for existing or new customers who are using the existing Azure Table storage SDKs.
Question 37. Which Tools Work With The Table Api?
Answer:
You can use the Azure Storage Explorer.
Tools with the flexibility to take a connection string in the format specified previously can support the new Table API. A list of table tools is provided
on the Azure Storage Client Tools page.
Question 38. Do Power Shell Or Azure Cli Work With The Table Api?
Answer:
PowerShell is supported. Azure CLI support is not currently available.
Question 39. Is The Concurrency On Operations Controlled?
Answer:
Yes, optimistic concurrency is provided via the use of the ETag mechanism.
Question 40. Is The Odata Query Model Supported For Entities?
Answer:
Yes, the Table API supports OData query and LINQ query.
Question 41. Can I Connect To Azure Table Storage And Azure Cosmos Db Table Api Side By Side In The Same Application?
Answer:
Yes, you can connect by creating two separate instances of the CloudTableClient, each pointing to its own URI via the connection string.
Question 42. How Do I Migrate An Existing Azure Table Storage Application To This Offering?
Answer:
AzCopy and the Azure Cosmos DB Data Migration Tool are both supported.
Question 43. How Is Expansion Of The Storage Size Done For This Service If, For Example, I Start With N Gb Of Data And My Data Will
Grow To 1 Tb Over Time?
Answer:
Azure Cosmos DB is designed to provide unlimited storage via the use of horizontal scaling. The service can monitor and effectively increase your
storage.
Question 44. How Do I Monitor The Table Api Offering?
Answer:
You can use the Table API Metrics pane to monitor requests and storage usage.
Question 45. How Do I Calculate The Throughput I Require?
Answer:
You can use the capacity estimator to calculate the Table Throughput that's required for the operations. For more information, see Estimate Request
Units and Data Storage. In general, you can represent your entity as JSON and provide the numbers for your operations.
Question 46. Can I Use The Table Api Sdk Locally With The Emulator?
Answer:
Not at this time.
Question 47. Can My Existing Application Work With The Table Api?
Answer:
Yes, the same API is supported.
Question 48. Do I Need To Migrate My Existing Azure Table Storage Applications To The Sdk If I Do Not Want To Use The Table Api
Features?
Answer:
No, you can create and use existing Azure Table storage assets without interruption of any kind. However, if you do not use the Table API, you
cannot benefit from the automatic index, the additional consistency option, or global distribution.
Question 49. How Do I Add Replication Of The Data In The Table Api Across Multiple Regions Of Azure?
Answer:
You can use the Azure Cosmos DB portal’s global replication settings to add regions that are suitable for your application. To develop a globally distributed application, you should also deploy your application with the PreferredLocation information set to the local region, to provide low read latency.
Question 50. How Do I Change The Primary Write Region For The Account In The Table Api?
Answer:
You can use the Azure Cosmos DB global replication portal pane to add a region and then fail over to the required region. For instructions, see
Developing with multi-region Azure Cosmos DB accounts.
Question 51. How Do I Configure My Preferred Read Regions For Low Latency When I Distribute My Data?
Answer:
To help read from the local location, use the PreferredLocation key in the app.config file. For existing applications, the Table API throws an error if LocationMode is set. Remove that code, because the Table API picks up this information from the app.config file.
Question 52. How Should I Think About Consistency Levels In The Table Api?
Answer:
Azure Cosmos DB provides well-reasoned trade-offs between consistency, availability, and latency. Azure Cosmos DB offers five consistency levels to
Table API developers, so you can choose the right consistency model at the table level and make individual requests while querying the data. When a client connects, it can specify a consistency level. You can change the level via the consistencyLevel argument of CreateCloudTableClient.
The Table API provides low-latency reads with "Read your own writes," with Bounded-staleness consistency as the default.
By default, Azure Table storage provides Strong consistency within a region and Eventual consistency in the secondary locations.
Question 53. Does Azure Cosmos Db Table Api Offer More Consistency Levels Than Azure Table Storage?
Answer:
Yes, for information about how to benefit from the distributed nature of Azure Cosmos DB, see Consistency levels. Because guarantees are provided
for the consistency levels, you can use them with confidence.
Question 54. When Global Distribution Is Enabled, How Long Does It Take To Replicate The Data?
Answer:
Azure Cosmos DB commits the data durably in the local region and pushes the data to other regions immediately in a matter of milliseconds. This
replication is dependent only on the round-trip time (RTT) of the datacenter. 
Question 55. Can The Read Request Consistency Level Be Changed?
Answer:
With Azure Cosmos DB, you can set the consistency level at the container level (on the table). By using the .NET SDK, you can change the level by
providing the value for the TableConsistencyLevel key in the app.config file. The possible values are: Strong, Bounded Staleness, Session, Consistent
Prefix, and Eventual. For more information, see Tunable data consistency levels in Azure Cosmos DB. The key idea is that you cannot set the request
consistency level at more than the setting for the table. For example, you cannot set the consistency level for the table at Eventual and the request
consistency level at Strong.
Question 56. How Does The Table Api Handle Failover If A Region Goes Down?
Answer:
The Table API leverages the globally distributed platform of Azure Cosmos DB. To ensure that your application can tolerate datacenter downtime, enable at least one more region for the account in the Azure Cosmos DB portal. For instructions, see Developing with multi-region Azure Cosmos DB accounts. You can set the priority of the region by using the portal.
You can add as many regions as you want for the account and control where it can fail over to by providing a failover priority. Of course, to use the database, you need to provide an application there too. When you do so, your customers will not experience downtime. The latest .NET client SDK is auto-homing, but the other SDKs are not. That is, it can detect the region that's down and automatically fail over to the new region.
Question 57. Is The Table Api Enabled For Backups?
Answer:
Yes, the Table API leverages the platform of Azure Cosmos DB for backups. Backups are made automatically.
Question 58. Does The Table Api Index All Attributes Of An Entity By Default?
Answer:
Yes, all attributes of an entity are indexed by default. 
Question 59. Does This Mean I Do Not Have To Create Multiple Indexes To Satisfy The Queries?
Answer:
Yes, Azure Cosmos DB Table API provides automatic indexing of all attributes without any schema definition. This automation frees developers to
focus on the application rather than on index creation and management.
Question 60. Can I Change The Indexing Policy?
Answer:
Yes, you can change the indexing policy by providing the index definition. For more information, see Azure Cosmos DB capabilities. You need to properly encode and escape the settings.
For the non-.NET SDKs, the indexing policy can only be set in the portal: in Data Explorer, navigate to the specific table you want to change, go to Scale & Settings > Indexing Policy, make the desired change, and then select Save.
From the .NET SDK, the indexing policy can be submitted in the app.config file:
{
  "indexingMode": "consistent",
  "automatic": true,
  "includedPaths": [
    {
      "path": "/somepath",
      "indexes": [
        {
          "kind": "Range",
          "dataType": "Number",
          "precision": -1
        },
        {
          "kind": "Range",
          "dataType": "String",
          "precision": -1
        }
      ]
    }
  ],
  "excludedPaths": [
    {
      "path": "/anotherpath"
    }
  ]
}
Question 61. Azure Cosmos Db As A Platform Seems To Have Lot Of Capabilities, Such As Sorting, Aggregates, Hierarchy, And Other
Functionality. Will You Be Adding These Capabilities To The Table Api?
Answer:
The Table API provides the same query functionality as Azure Table storage. Azure Cosmos DB also supports sorting, aggregates, geospatial query,
hierarchy, and a wide range of built-in functions. We will provide additional functionality in the Table API in a future service update. 
Question 62. When Should I Change Table Throughput For The Table Api?
Answer:
You should change Table Throughput when either of the following conditions applies:
o You're performing an extract, transform, and load (ETL) of data, or you want to upload a lot of data in a short amount of time.
o You need more throughput from the container at the back end. For example, you see that the used throughput is more than
the provisioned throughput, and you are getting throttled. 
Question 63. Can I Scale Up Or Scale Down The Throughput Of My Table Api Table?
Answer:
Yes, you can use the Azure Cosmos DB portal’s scale pane to scale the throughput. 
Question 64. Is A Default Table Throughput Set For Newly Provisioned Tables?
Answer:
Yes, if you do not override the Table Throughput via app.config and do not use a pre-created container in Azure Cosmos DB, the service creates a table with a default throughput of 400 RU/s.
Question 65. Is There Any Change Of Pricing For Existing Customers Of The Azure Table Storage Service?
Answer:
None. There is no change in price for existing Azure Table storage customers.
Question 66. How Is The Price Calculated For The Table Api?
Answer:
The price depends on the allocated Table Throughput.
Question 67. How Do I Handle Any Throttling On The Tables In Table Api Offering?
Answer:
If the request rate exceeds the capacity of the provisioned throughput for the underlying container, you get an error, and the SDK retries the call by
applying the retry policy.
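If you want to react to throttling yourself in addition to the SDK's built-in retry policy, a rough sketch with the azure-data-tables Python package (the package choice, table name, connection string, and backoff values are assumptions for illustration, not taken from this document) could look like this:

import time
from azure.core.exceptions import HttpResponseError
from azure.data.tables import TableServiceClient

# Placeholder connection string and table name.
service = TableServiceClient.from_connection_string("<cosmos-table-api-connection-string>")
table = service.get_table_client(table_name="mytable")

entity = {"PartitionKey": "user1", "RowKey": "1", "title": "hello"}
for attempt in range(5):
    try:
        table.create_entity(entity=entity)
        break
    except HttpResponseError as e:
        if e.status_code == 429:          # request rate too large: back off and retry
            time.sleep(2 ** attempt)
        else:
            raise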
Question 68. Why Do I Need To Choose A Throughput Apart From Partition Key And Rowkey To Take Advantage Of The Table Api
Offering Of Azure Cosmos Db?
Answer:
Azure Cosmos DB sets a default throughput for your container if you do not provide one in the app.config file or via the portal.
Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operation. This guarantee is possible when the engine
can enforce governance on the tenant's operations. Setting Table Throughput ensures that you get the guaranteed throughput and latency, because
the platform reserves this capacity and guarantees operational success.
By using the throughput specification, you can elastically change it to benefit from the seasonality of your application, meet the throughput needs,
and save costs.
Question 69. Azure Table Storage Has Been Very Inexpensive For Me, Because I Pay Only To Store The Data, And I Rarely Query. The
Azure Cosmos Db Table Api Offering Seems To Be Charging Me Even Though I Have Not Performed A Single Transaction Or Stored
Anything. Can You Please Explain?
Answer:
Azure Cosmos DB is designed to be a globally distributed, SLA-based system with guarantees for availability, latency, and throughput. When you
reserve throughput in Azure Cosmos DB, it is guaranteed, unlike the throughput of other systems. Azure Cosmos DB provides additional capabilities
that customers have requested, such as secondary indexes and global distribution.
Question 70. I Never Get A "Quota Full" Notification (Indicating That A Partition Is Full) When I Ingest Data Into Azure Table Storage.
With The Table Api, I Do Get This Message. Is This Offering Limiting Me And Forcing Me To Change My Existing Application?
Answer:
Azure Cosmos DB is an SLA-based system that provides unlimited scale, with guarantees for latency, throughput, availability, and consistency. To
ensure guaranteed premium performance, make sure that your data size and index are manageable and scalable. The 10-GB limit on the number of
entities or items per partition key is to ensure that we provide great lookup and query performance. To ensure that your application scales well, even
for Azure Storage, we recommend that you not create a hot partition by storing all information in one partition and querying it.
Question 71. So Partition Key And Rowkey Are Still Required With The Table Api?
Answer:
Yes. Because the surface area of the Table API is similar to that of the Azure Table storage SDK, the partition key provides an efficient way to
distribute the data. The row key is unique within that partition. The row key needs to be present and can't be null as in the standard SDK. The length
of RowKey is 255 bytes and the length of Partition Key is 1 KB.
Question 72. What Are The Error Messages For The Table Api?
Answer:
Azure Table storage and the Azure Cosmos DB Table API use the same SDKs, so most of the errors are the same.
Question 73. Why Do I Get Throttled When I Try To Create Lot Of Tables One After Another In The Table Api?
Answer:
Azure Cosmos DB is an SLA-based system that provides latency, throughput, availability, and consistency guarantees. Because it is a provisioned
system, it reserves resources to guarantee these requirements. The rapid rate of creation of tables is detected and throttled. We recommend that
you look at the rate of creation of tables and lower it to less than 5 per minute. Remember that the Table API is a provisioned system. The moment
you provision it, you will begin to pay for it.
Question 74. How Can I Apply The Functionality Of Graph Api (preview) To Azure Cosmos Db?
Answer:
You can use an extension library to apply the functionality of Graph API (Preview). This library is called Microsoft.Azure.Graphs, and it is available on
NuGet.
Question 75. It Looks Like You Support The Gremlin Graph Traversal Language. Do You Plan To Add More Forms Of Query?
Answer:
Yes, we plan to add other mechanisms for query in the future.
Question 76. How Can I Use The New Graph Api (preview) Offering?
Answer:
To get started, complete the Graph API quick-start article.
Question 77. Why Is Choosing A Throughput For A Table A Requirement?
Answer:
Azure Cosmos DB sets a default throughput for your container based on where you create the table from (the portal or CQL). Azure Cosmos DB provides
guarantees for performance and latency, with upper bounds on operation. This guarantee is possible when the engine can enforce governance on the
tenant's operations. Setting throughput ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity
and guarantees operation success. You can elastically change throughput to benefit from the seasonality of your application and save costs.
Question 78. What Happens When Throughput Is Exceeded?
Answer:
Azure Cosmos DB provides guarantees for performance and latency, with upper bounds on operation. This guarantee is possible when the engine
can enforce governance on the tenant's operations. This is possible by setting the throughput, which ensures that you get the guaranteed throughput and latency, because the platform reserves this capacity and guarantees operational success. When you exceed this capacity, you get an overloaded error message indicating that your capacity was exceeded: 0x1001 Overloaded: the request cannot be processed because "Request Rate is large". At that point, it is essential to see which operations, and what volume of them, cause this issue. You can use the metrics on the portal to see whether the consumed capacity exceeds the provisioned capacity. Then you need to ensure that capacity is consumed nearly evenly across all underlying partitions. If you see that most of the throughput is consumed by one partition, you have workload skew.
Metrics are available that show you how throughput is used over hours, days, and per seven days, across partitions or in aggregate.
Question 79. Does The Primary Key Map To The Partition Key Concept Of Azure Cosmos Db?
Answer:
Yes, the partition key is used to place the entity in the right location. In Azure Cosmos DB, it is used to find the right logical partition, which is stored on a physical partition. The partitioning concept is well explained in the Partition and scale in Azure Cosmos DB article. The essential takeaway here is that a logical partition should not exceed the 10-GB limit today.
Question 80. What Happens When I Get A "Quota Full" Notification Indicating That A Partition Is Full?
Answer:
Azure Cosmos DB is an SLA-based system that provides unlimited scale, with guarantees for latency, throughput, availability, and consistency. Its Cassandra API also allows unlimited storage of data. This unlimited storage is based on horizontal scale-out of data, using partitioning as the key concept. The partitioning concept is well explained in the Partition and scale in Azure Cosmos DB article.
You should adhere to the 10-GB limit on the number of entities or items per logical partition. To ensure that your application scales well, we recommend that you not create a hot partition by storing all information in one partition and querying it. This error can occur only if your data is skewed: that is, you have a lot of data for one partition key (more than 10 GB). You can find the distribution of data by using the storage portal. The way to fix this error is to re-create the table and choose a more granular primary (partition) key, which allows better distribution of data.
Question 81. Is It Possible To Use Cassandra Api As Key Value Store With Millions Or Billions Of Individual Partition Keys?
Answer:
Azure Cosmos DB can store unlimited data by scaling out the storage. This is independent of the throughput. Yes, you can always use the Cassandra API to store and retrieve keys and values by specifying the right primary (partition) key. These individual keys get their own logical partition and sit atop a physical partition without issues.
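As a hedged sketch with the open-source cassandra-driver package (the host name, port, and credentials below follow the commonly documented Cosmos DB Cassandra API conventions and are placeholders here):

import ssl
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Placeholder account values; the Cassandra API listens on port 10350 over TLS.
auth = PlainTextAuthProvider(username="<account-name>", password="<primary-key>")
cluster = Cluster(["<account-name>.cassandra.cosmos.azure.com"], port=10350,
                  auth_provider=auth, ssl_context=ssl.create_default_context())
session = cluster.connect()

# A granular partition key spreads millions of keys across logical partitions.
session.execute("CREATE KEYSPACE IF NOT EXISTS kv WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")
session.execute("CREATE TABLE IF NOT EXISTS kv.store (key text PRIMARY KEY, value text)")
session.execute("INSERT INTO kv.store (key, value) VALUES (%s, %s)", ("user:42", "hello"))
row = session.execute("SELECT value FROM kv.store WHERE key = %s", ("user:42",)).one()
print(row.value)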
Question 82. Is It Possible To Create Multiple Tables With Apache Cassandra Api Of Azure Cosmos Db?
Answer:
Yes, it is possible to create multiple tables with the Apache Cassandra API. Each of those tables is treated as a unit for throughput and storage.
Question 83. Is It Possible To Create Multiple Tables In Succession?
Answer:
Azure Cosmos DB is a resource-governed system for both data and control plane activities. Containers, such as collections and tables, are runtime entities that are provisioned for a given throughput capacity. Creating these containers in quick succession is not an expected activity and is throttled. If you have tests that drop and create tables immediately, please try to space them out.
Question 84. Is It Possible To Bring In A Lot Of Data After Starting From A Normal Table?
Answer:
The storage capacity is automatically managed and increases as you push in more data, so you can confidently import as much data as you need without having to manage or provision nodes.
Question 85. Is It Possible To Supply Yaml File Settings To Configure Apache Cassandra Api Of Azure Cosmos Db Behavior?
Answer:
The Apache Cassandra API of Azure Cosmos DB is a platform service. It provides protocol-level compatibility for executing operations and hides away the complexity of management, monitoring, and configuration. As a developer or user, you do not need to worry about availability, tombstones, key cache, row cache, bloom filters, and a multitude of other settings. Azure Cosmos DB's Apache Cassandra API focuses on providing the read and write performance that you require without the overhead of configuration and management.
Question 86. Will Apache Cassandra Api For Azure Cosmos Db Support Node Addition/cluster Status/node Status Commands?
Answer:
The Apache Cassandra API is a platform service that makes capacity planning and responding to the elasticity demands for throughput and storage a breeze. With Azure Cosmos DB, you provision the throughput you need and then scale it up and down any number of times throughout the day without worrying about adding, deleting, or managing nodes. This means you do not need to use node and cluster management tools either.
Question 87. What Happens With Respect To Various Config Settings For Keyspace Creation Like Simple/Network?
Answer:
Azure Cosmos DB provides global distribution out of the box for availability and low-latency reasons. You do not need to set up replicas or other topology settings. All writes are always durably quorum-committed in any region where you write, while providing performance guarantees.
Question 88. What Happens With Respect To Various Settings For Table Metadata Like Bloom Filter, Caching, Read Repair Change,
Gc_grace, Compression Memtable_flush_period Etc?
Answer:
Azure Cosmos DB provides performance for reads and writes, and throughput, without the need to touch any of these configuration settings and risk accidentally manipulating them.
Question 89. Is Time-to-live (ttl) Supported For Cassandra Tables?
Answer:
Yes, TTL is supported.
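For illustration, TTL can be set with standard CQL, either per table or per write (a sketch reusing the placeholder connection details from the Cassandra API example above):

import ssl
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

# Placeholder account values, as in the earlier Cassandra API sketch.
auth = PlainTextAuthProvider(username="<account-name>", password="<primary-key>")
cluster = Cluster(["<account-name>.cassandra.cosmos.azure.com"], port=10350,
                  auth_provider=auth, ssl_context=ssl.create_default_context())
session = cluster.connect()

session.execute("CREATE KEYSPACE IF NOT EXISTS kv WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")
# Table-level default TTL of one hour; the INSERT overrides it to 60 seconds.
session.execute("CREATE TABLE IF NOT EXISTS kv.events (id text PRIMARY KEY, payload text) "
                "WITH default_time_to_live = 3600")
session.execute("INSERT INTO kv.events (id, payload) VALUES (%s, %s) USING TTL 60",
                ("e1", "expires after one minute"))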
Question 90. Is It Possible To Monitor Node Status, Replica Status, Gc, And Os Parameters Earlier With Various Tools? What Needs To
Be Monitored Now?
Answer:
Azure Cosmos DB is a platform service that helps you increase productivity without worrying about managing and monitoring infrastructure. You just need to take care of throughput; the metrics on the portal show whether you are getting throttled, so you can increase or decrease that throughput. Monitor SLAs, use metrics, and use diagnostic logs.
Question 91. Is Composite Partition Key Supported?
Answer:
Yes, you can use regular syntax to create a composite partition key.
Question 92. Can I Use Sstable Loader For Data Loading?
Answer:
No, the sstable loader is not supported during the preview.
Question 93. Does Cassandra Api Provide Full Backups?
Answer:
Azure Cosmos DB provides two free full backups, taken at four-hour intervals, across all APIs. This ensures that you do not need to set up a backup schedule. If you want to modify retention and frequency, send an email to [email protected] or raise a support case.
Information about backup capability is provided in the Automatic online backup and restore with Azure Cosmos DB article.
Question 94. How Does The Cassandra Api Account Handle Failover If A Region Goes Down?
Answer:
The Azure Cosmos DB Cassandra API borrows from the globally distributed platform of Azure Cosmos DB. To ensure that your application can tolerate datacenter downtime, enable at least one more region for the account in the Azure Cosmos DB portal. For instructions, see Developing with multi-region Azure Cosmos DB accounts. You can set the priority of the region by using the portal.
You can add as many regions as you want for the account and control where it can fail over to by providing a failover priority. To use the database, you need to provide an application there too. When you do so, your customers will not experience downtime.
Question 95. Does The Apache Cassandra Api Index All Attributes Of An Entity By Default?
Answer:
Yes, all attributes of an entity are indexed by default by Azure Cosmos DB. 
Question 96. Azure Cosmos Db As A Platform Seems To Have Lot Of Capabilities, Such As Change Feed And Other Functionality. Will
These Capabilities Be Added To The Cassandra Api?
Answer:
The Apache Cassandra API provides the same CQL functionality as Apache Cassandra. We plan to look into the feasibility of supporting such capabilities in the future.
Question 97. Why Are You Moving To Azure Cosmos Db?
Answer:
o Azure Cosmos DB is the next big leap in globally distributed, at-scale cloud databases. As a DocumentDB customer, you now
have access to the breakthrough system and capabilities offered by Azure Cosmos DB.
o Azure Cosmos DB started as “Project Florence” in 2010 to address the pain points faced by developers in building large-scale
applications inside Microsoft. The challenges of building globally distributed apps are not unique to Microsoft, so we made the first
generation of this technology available in 2015 to Azure developers in the form of Azure DocumentDB.
o Since that time, we’ve added new features and introduced significant new capabilities. Azure Cosmos DB is the result. As a part
of this release, DocumentDB customers, with their data, automatically and seamlessly become Azure Cosmos DB customers. These
capabilities are in the areas of the core database engine, as well as global distribution, elastic scalability, and industry-leading,
comprehensive SLAs. Specifically, we have evolved the Azure Cosmos DB database engine to efficiently map all popular data
models, type systems, and APIs to the underlying data model of Azure Cosmos DB.
o The current developer-facing manifestation of this work is the new support for Gremlin and Table storage APIs. And this is just
the beginning. We plan to add other popular APIs and newer data models over time, with more advances in performance and
storage at global scale.
o It is important to point out that the DocumentDB SQL dialect has always been just one of the many APIs that the underlying
Azure Cosmos DB can support. For developers who use a fully managed service such as Azure Cosmos DB, the only interface to the
service is the APIs that are exposed by the service. Nothing really changes for existing DocumentDB customers. In Azure Cosmos
DB, you get exactly the same SQL API that DocumentDB offers. And now (and in the future), you can access other previously
inaccessible capabilities.
o Another manifestation of our continued work is the extended foundation for global and elastic scalability of throughput and
storage. We have made several foundational enhancements to the global distribution subsystem. One of the many such developer-
facing features is the Consistent Prefix consistency model, which makes a total of five well-defined consistency models. We will release
many more interesting capabilities as they mature.
Question 98. What Do I Need To Do To Ensure That My Documentdb Resources Continue To Run On Azure Cosmos Db?
Answer:
You don't need to make any changes at all. Your DocumentDB resources are now Azure Cosmos DB resources, and there was no interruption in the
service when this move occurred.
Question 99. What Changes Do I Need To Make For My App To Work With Azure Cosmos Db?
Answer:
There are no changes to make. Classes, namespaces, and NuGet package names have not changed. As always, we recommend that you keep your
SDKs up to date to take advantage of the latest features and improvements.
Question 100. Are There Changes To Pricing?
Answer:
No, the cost of running your app on Azure Cosmos DB is the same as it was before.
