Wednesday, October 8, 2014

How To: Export a SQL Server Database to Windows Azure

If you are beginning to work with Windows Azure and are ready to deploy an application or service, you may begin to wonder how to export that existing backend SQL Server database as well. 

The good news is that it's quite trivial to do using SQL Server Management Studio and the Windows Azure Management Portal. For this example I'm going to export my local 'BowlingStats' SQL database, which is used by my BowlingSPA application, to Azure.

Prerequisites
  • Obtain a Windows Azure Account

1. Create a Storage Account

Once logged into the Azure Management Portal, select 'Storage' from the options on the left, and then from 'Data Services' -> 'Storage,' select the 'Quick Create' option. Enter a URL for the name of your storage account, as well as a location and replication strategy. Azure will normally pre-populate these fields with sensible defaults, but you can change them if desired.



You will see a message once the account has been successfully created:



Upon creating the account you will be presented with a screen containing a 'Primary' and 'Secondary' set of access keys for the Storage Account you just created. Store these keys as they will be needed to connect to Azure Storage later from SQL Server.



Don't worry if you quickly dismissed the dialog with the keys. You can always get back to them by selecting 'Manage Access Keys' at the bottom of the 'Storage' Azure option. You can also regenerate the keys if they have been compromised.
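
If you'd like to sanity-check the keys from code (or plan to script uploads later), here is a minimal sketch using the WindowsAzure.Storage client library. The account name, the key placeholder, and the 'bacpacs' container name are just examples, not values taken from the portal:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class StorageCheck
    {
        static void Main()
        {
            // Placeholders: use your storage account name and the 'Primary'
            // access key shown in the portal dialog.
            var connectionString =
                "DefaultEndpointsProtocol=https;" +
                "AccountName=bowlingstatsstorage;" +
                "AccountKey=<primary-access-key>";

            CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
            CloudBlobClient client = account.CreateCloudBlobClient();

            // Create (or reuse) a container that can hold .bacpac exports later.
            CloudBlobContainer container = client.GetContainerReference("bacpacs");
            container.CreateIfNotExists();

            Console.WriteLine("Connected to: " + account.BlobEndpoint);
        }
    }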




2. Export your SQL Database as a .bacpac file directly to Azure

Now that we have a storage account, we need to hop over to SQL Server Management Studio 2012 and export our database as a .bacpac file to Azure.

Right-click the database and select 'Tasks' -> 'Export Data-tier Application...'




After selecting 'Next' from the Introduction screen, select the 'Save to Windows Azure' option and then press 'Connect'. Here you can enter the name of your Azure Storage Account created in step 1, as well as the 'Primary' key value provided when the Storage Account was created. Press the 'Connect' button once the information has been entered.



After dismissing the connect dialog, add a 'Container' name that will hold the .bacpac file on Azure. Once the information is correct, press the 'Next' button and begin the export of data from your SQL Server database directly to Windows Azure!



Once the export completes successfully, the .bacpac file will be available in storage for import back on the Azure Management Portal.
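
The SSMS wizard covers all of this end to end, but if you ever want to script the same export, here is a rough sketch using the DacFx API (Microsoft.SqlServer.Dac) plus the storage client. The local server name, file path, and container name are my assumptions, not values taken from the wizard:

    using System;
    using System.IO;
    using Microsoft.SqlServer.Dac;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class ExportAndUpload
    {
        static void Main()
        {
            // Assumed local server, file path, and container name -- adjust to taste.
            var dac = new DacServices(@"Data Source=.\SQLEXPRESS;Integrated Security=True");
            dac.ExportBacpac(@"C:\Temp\BowlingStats.bacpac", "BowlingStats");

            // Push the file into the storage container created in step 1.
            var connectionString =
                "DefaultEndpointsProtocol=https;" +
                "AccountName=bowlingstatsstorage;" +
                "AccountKey=<primary-access-key>";
            CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference("bacpacs");
            container.CreateIfNotExists();

            CloudBlockBlob blob = container.GetBlockBlobReference("BowlingStats.bacpac");
            using (FileStream stream = File.OpenRead(@"C:\Temp\BowlingStats.bacpac"))
            {
                blob.UploadFromStream(stream);
            }

            // This URL is what the portal's import dialog in step 3 will point at.
            Console.WriteLine(blob.Uri);
        }
    }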




3. Import your uploaded bacpac to a SQL Database on Azure

Finally, let's go back to the Windows Azure Management Portal and import the SQL Database bacpac file. Select 'SQL Databases' from the options on the left, and then from 'Data Services' -> 'SQL Database,' select the 'Import' option.



The next dialog will allow you to choose the database settings. To select the bacpac we already uploaded, press the folder icon to browse to the available data files in storage.



Expand your storage account and you should see the container name set when the .bacpac file was exported from SQL Server. Select the file and press 'Open' to automatically populate the bacpac URL.



Enter a name for the database, then select your subscription, service tier (do not select 'Web' or 'Business' from the screenshot, as those tiers will eventually be retired), performance level, max size, and server. If you have previously created a SQL Server instance, you can choose it and provide the login information. If this is the first time, select 'New SQL database server' and press the next button.



You will now need to create the SQL Server instance login information. Note that while Azure sometimes appears to be smart and pre-populates options (e.g. 'Region') with the one closest to you, I did not find that to be the case with this dialog. 'East Asia' was pre-selected for me, which produced a warning that the storage account and database were not in the same region. Switch the region to match your storage account and the warning will go away. Once you enter your login credentials, press the accept button to complete the process.
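
If you would rather script this last step than click through the portal, a rough DacFx sketch run against the new server looks like the following. The server name, credentials, and file path are placeholders, and note the portal import works differently under the hood: it pulls the .bacpac straight from blob storage, whereas this imports a local copy of the file:

    using Microsoft.SqlServer.Dac;

    class ImportToAzure
    {
        static void Main()
        {
            // Placeholders: the Azure SQL server name and the admin login created above.
            var azureConnection =
                "Server=tcp:yourserver.database.windows.net,1433;" +
                "User ID=youradminlogin;Password=yourpassword;Encrypt=True;";

            var dac = new DacServices(azureConnection);

            // Create the BowlingStats database on the Azure server from a local
            // copy of the .bacpac file.
            using (BacPackage package = BacPackage.Load(@"C:\Temp\BowlingStats.bacpac"))
            {
                dac.ImportBacpac(package, "BowlingStats");
            }
        }
    }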



That's it! Once provisioned, your new SQL Database and instance will be displayed under 'SQL Databases' in Windows Azure for all of your cloud application and service needs.



Sunday, February 19, 2012

Should I Place My Business Logic In Code Or In Stored Procedures?

I see this age old question asked on forums, LinkedIn conversations, interviews, and in general developer discussions. To tell you the truth it has been discussed 1000 times before and I am going to add one more to the pile here.

This is a great debate, and I have been a part of it several times myself. I can honestly say there is no 'right' or 'wrong' in my opinion (OK, maybe there is, but let me be a little PC here so as not to offend the DB folks right from the start), only 'pros' and 'cons' to each method. Both have their advantages and disadvantages, and neither option screams out as the obvious choice on the surface unless there is, say, a significant performance advantage one way or the other. However, I have my bias toward one side of the argument, as I will detail below.

In my experience this is usually how the sides line up: people with a stronger or more prominent SQL/database background will opt for placing all the logic in the database, whereas software engineers will tend to place those rules within the application and apply the appropriate architecture, design patterns, and frameworks to organize and describe them.

At the highest level, the database folks will argue for placing that logic in stored procedures or the database because you can then "make changes on the fly" without the hassle of "recompiling code and pushing out new releases," whereas developers who want that logic in code will argue that T-SQL is a limited language for debugging and expressing complex business rules, which can be evaluated so much better in code.

Well, as far as I am concerned, that is about as far as the database crowd's argument goes for me. While the argument is valid, recompiling and redeploying an app because of rule changes is typically not an impossible task. In fact, with today's modern deployment options, regardless of the technology being used, this argument is not as strong as it used to be.

In fact, the whole idea of "making changes without redeployment" or "changes on the fly" is a bit skewed to begin with. Code of any type should never be changed in a production environment without being tested. Managed code is more likely to be tested using unit tests or, at a minimum, by running the application through some end-user tests before deployment. The temptation to change a stored procedure without fully testing it is ever present, and without a stringent environment where DBAs lock down the database, that option can be exercised all too easily, which is a bad idea.

As a developer I tend to place these rules (business rules, calculations, math, etc.) in .NET managed code. T-SQL is not the easiest language in which to express and implement complex business rules, and it is also more difficult to debug and test. My philosophy is typically to get the raw data back to the code and then manipulate it from there. You have so much more power with the .NET Framework and managed code to express the rules than with T-SQL. While it is possible with T-SQL, have you ever seen one of those 500-line stored procs with a gazillion rules where some database person was flexing their muscle? Yuck! We are in the business of writing well-formed code using OO principles, and that is just not possible using T-SQL and database languages.
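
As a purely hypothetical illustration of the kind of rule I mean (the names and numbers below are made up, not anything from BowlingStats), compare how clearly a calculation reads, and how easily it can be unit tested, when it lives in managed code rather than buried in a long stored procedure:

    // Hypothetical business rule kept in managed code, where it can be
    // expressed clearly and unit tested in isolation.
    public static class HandicapRules
    {
        // 90% of the difference between a 220 base and the bowler's average,
        // never negative (illustrative numbers, not an official rule).
        public static int Calculate(int average)
        {
            int handicap = (int)((220 - average) * 0.9);
            return handicap > 0 ? handicap : 0;
        }
    }

    // A simple unit test (e.g. NUnit/MSTest) then pins the rule down:
    //   Assert.AreEqual(45, HandicapRules.Calculate(170));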

Which side do I fall on? That's easy. I am a developer and engineer at heart, and I believe in using the right design and architecture, and that those business rules (or a majority of them) should reside within the application. What's the database for? Storing and retrieving data and executing some logic, but for me the brain that solves the business problem is not going to be a stored procedure, view, or user-defined function. The only time I flex a tad on this is with one-off custom queries, say for reporting, where the object model does not currently support the data needed to create the output. In that case I might extend the logic a bit on the database side. If I can extend my object model to support the reporting without an extreme amount of work, then by all means I will do it. However, whenever possible I keep any business rules or logic within the application.

So the answer is not always so straightforward, but hopefully this post helps you think about where this logic should reside (......in the code. Wait, did I just say that!?!?). Now go ask this question on a SQL forum or blog and see what response you get. It will probably be along the lines of... "Put it all in Stored Procedures!!" Have fun coding :P