Admit it: either you love it or you hate it. I must confess I was part of the latter group, but the wizard has now convinced me to love it, despite some caveats. So what made me switch sides? First, let's look at how the wizard works and how we can use it in a Continuous Integration environment.
We all know the wizard, as it is installed together with SQL Server, and most of us have probably started it at some point to see what it can do. By default a wizard interface is not something we can use directly in a CI environment, but luckily the wizard can be started from a command prompt with some parameters to suppress the GUI. See here for the documentation. Basically the wizard accepts three different parameters:
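As far as I can tell from the documentation, those are /s (silent mode, optionally with a log file), /a (answer mode) and /o (output mode, which writes an XMLA script and can be combined with /d to avoid connecting to the target). A minimal sketch of calling it from PowerShell; the executable path below is for a SQL Server 2014 installation and the file paths are placeholders:

```powershell
# Minimal sketch: the wizard location depends on your SQL Server version and
# installation; the .asdatabase, log and XMLA paths are placeholders.
$wizard = "C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe"

# Silent mode: deploy using the companion files next to the .asdatabase file,
# writing progress to a log file.
& $wizard "C:\Build\Model.asdatabase" /s:"C:\Build\deploy.log"

# Output mode: do not deploy, just generate an XMLA script to run later.
& $wizard "C:\Build\Model.asdatabase" /o:"C:\Build\Model.xmla" /d
```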
But despite all the settings and documentation, it is not clear what all the capabilities of the wizard are. So mostly I use the option to create an XMLA file, and during the deployment to the server(s) the data connection settings are replaced with e.g. PowerShell. I have deployed a lot of cubes/models this way.
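A minimal sketch of that XMLA-plus-PowerShell approach, assuming the SqlServer module (or the older SQLASCMDLETS module) is available for Invoke-ASCmd; the server names, file path and the string being replaced are placeholders:

```powershell
# Sketch: patch the data source in the generated XMLA and execute it against
# the target instance. Assumes Invoke-ASCmd is available; server names, paths
# and the replaced string are placeholders.
Import-Module SqlServer

$xmla = Get-Content "C:\Build\Model.xmla" -Raw

# Naive example: swap the development data source server for the target one.
$xmla = $xmla -replace "Data Source=DEV-SQL01", "Data Source=PRD-SQL01"

# Execute the (patched) create/alter script on the target server.
Invoke-ASCmd -Server "PRD-SSAS01" -Query $xmla
```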
But in my current assignment at a bank, it is necessary to deploy and run everything according to the principle of least privilege. In my case I have to use a dedicated user for the database connection and deploy via a dedicated (different) user in a CI process. So I started looking into the real capabilities of the deployment wizard.
The first thing I noticed is that a Visual Studio build of a Tabular model creates three files: Model.asdatabase, Model.deploymentoptions and Model.deploymenttargets. The first file contains the structure of the database, the second one the different deployment options, like whether the database will be processed after deployment and what to do with existing roles (merge or overwrite). The third file contains a connection string to the deployment server plus the database name and the model/cube name.
If you build a standard Multidimensional solution, you will get four files: the three mentioned above and a ‘.configsettings’ file. In this file we can define a different connection string for the data connection in the database. So this is our friend, because no matter which server/database is used during development, it is possible to overwrite it with a different one stored in the **‘.configsettings’** file. And because a Tabular model is based on the same model (BISM) as a Multidimensional cube, we can use the ‘.configsettings’ file in both scenarios.
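To give an idea of how that could be automated, here is a minimal sketch that rewrites the connection string inside a ‘.configsettings’ file. I deliberately match elements loosely by name because the exact XML structure may differ per project; the path and connection string are placeholders:

```powershell
# Sketch: rewrite the data source connection string inside a .configsettings
# file before handing it to the wizard. Elements are matched loosely on
# purpose; inspect your own file for the exact structure.
$path = "C:\Build\Model.configsettings"
[xml]$config = Get-Content $path -Raw

# Find every ConnectionString element, wherever it lives in the document.
foreach ($node in $config.SelectNodes("//*[local-name()='ConnectionString']")) {
    $node.InnerText = "Provider=SQLNCLI11;Data Source=PRD-SQL01;Initial Catalog=DWH;Integrated Security=SSPI;"
}

$config.Save($path)
```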
But the wizard is capable of consuming two more files: an ‘.assecurityinformation’ file and an ‘.asassemblylocations’ file. Both files are optional and are only used if they are available with the same base name as the ‘.asdatabase’ file. The first contains encrypted security settings regarding database connections, and the second one contains assembly information on where to store the embedded assemblies. I must confess this is the part I didn’t investigate, as I have never used custom assemblies as part of a database in my life.
So if we provide the wizard an ‘.asdatabase’ file name as input, the wizard loads the following files if available: ‘.deploymentoptions’, ‘.deploymenttargets’, ‘.configsettings’, ‘.assecurityinformation’ and ‘.asassemblylocations’. Together these files can provide all the information needed for a fully automatic deployment scenario.
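As a quick illustration, a small sketch that shows which of those companion files would be picked up next to a given ‘.asdatabase’ file (the path is a placeholder):

```powershell
# Sketch: list which companion files the wizard would pick up, based on the
# base name of the .asdatabase file (path is a placeholder).
$asdatabase = "C:\Build\Model.asdatabase"
$dir  = Split-Path $asdatabase -Parent
$base = [System.IO.Path]::GetFileNameWithoutExtension($asdatabase)

".deploymentoptions", ".deploymenttargets", ".configsettings",
".assecurityinformation", ".asassemblylocations" | ForEach-Object {
    $file = Join-Path $dir "$base$_"
    "{0,-60} found: {1}" -f $file, (Test-Path $file)
}
```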
As shown above, the wizard is capable of deploying a solution fully automatically when provided with the correct information in a set of files for a given environment. But ideally I like to deploy my solutions according to the principle of build once, deploy many: build the solution once and deploy the build output to the different DTAP environments using different parameters.
If we look at deploying SQL Server databases, for instance: we create one ‘.dacpac’ file and deploy it via ‘sqlpackage.exe’ with a different publish profile for each environment. But with the deployment wizard, although I can store different environment settings in a ‘.configsettings’ file, I cannot provide a specific one as a parameter at runtime.
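For comparison, the dacpac scenario looks roughly like this; the sqlpackage.exe location and all file names are placeholders:

```powershell
# Sketch: one .dacpac, two environments, only the publish profile changes.
# The sqlpackage.exe location and all file names are placeholders.
$sqlpackage = "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe"

& $sqlpackage /Action:Publish /SourceFile:"C:\Build\DWH.dacpac" /Profile:"C:\Config\Test.publish.xml"
& $sqlpackage /Action:Publish /SourceFile:"C:\Build\DWH.dacpac" /Profile:"C:\Config\Production.publish.xml"
```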
This gave me the idea to wrap the wizard and accept a **‘.configsettings’** file as a parameter. Via ILSpy it is possible to decompile the wizard code, and luckily the code is not obfuscated; only the wizard’s own methods are not exposed. So we can take a look at how the wizard works and wrap the official methods with a custom application.
With this idea I created an application capable of delivering what I needed: AnalysisServices.Deployment. As an extra option I added the capability to create an ‘.assecurityinformation’ file to store a username and password for a model datasource connection. This file is encrypted using information from the current user and current machine, so it cannot easily be reused elsewhere. I have another idea to create a custom security file which encrypts this information using a master key, so it can be exchanged easily.
The solution uses the same files and the same logic as the official wizard. The syntax at this moment is:
- /d : Deployment mode
- /m:[filename] : Model definition
- /t:[filename] : Deployment target
- /o:[filename] : Deployment options
- /c:[filename] : Deployment config
- /s:[filename] : Deployment security
- /a:[filename] : Deployment assembly
- /i : Impersonation mode
- /ds:[ID] : Datasource ID
- /u:[username] : Impersonation username
- /p:[password] : Impersonation password
- /f:[filename] : Export filename
- /?, /h : This help
There are two modes: deployment and impersonation. With the /d switch the deployment mode is triggered, and the /m switch is required. If no other switches are provided, the application will look for the remaining files sharing the same base name. If a file is provided with the corresponding switch, the application will use that file instead. So this provides the option to deploy the same build output to a different server with a different datasource connection by simply providing different parameters. At this moment the ‘.asassemblylocations’ file is not yet supported.
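To make that concrete, a deployment call could look something like the sketch below; the executable location and all paths are placeholders, and the switches are the ones listed above:

```powershell
# Sketch: deploy the same Model.asdatabase to the test environment, overriding
# the deployment target and the data connection with environment-specific files.
# The executable location and all paths are placeholders.
$deploy = "C:\Tools\AnalysisServices.Deployment.exe"

& $deploy /d `
    /m:"C:\Build\Model.asdatabase" `
    /t:"C:\Config\Test.deploymenttargets" `
    /c:"C:\Config\Test.configsettings"
```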
For the impersonation mode, the **/i** switch will create a file using the provided ID together with the username and password. At this moment this file is only capable of storing a username and password for one datasource connection.
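Again as a sketch with placeholder values, creating such a security file could look like this:

```powershell
# Sketch: create an .assecurityinformation file holding the credentials for one
# datasource connection. The ID, credentials and output path are placeholders.
$deploy = "C:\Tools\AnalysisServices.Deployment.exe"

& $deploy /i `
    /ds:"DWH" `
    /u:"svc_ssas_dwh" `
    /p:"S0meSecretPassword" `
    /f:"C:\Build\Model.assecurityinformation"
```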
At this moment I have only tested it with SQL Server 2014; I will also test it with SQL Server 2012 and change the logic if needed. Other enhancements, in random order:
-JP