## Thursday, April 6, 2017

### SuperBulkCopy (Major) Refresh

SuperBulkCopy (and the command-line sbcp) is one of our newest free tools, which replaces the traditional usage of both 'BULK INSERT' and 'bcp' with something much more powerful, yet simpler.

If you've ever worked with either, trying to turn text-based files into a SQL Table, you are probably quite aware of the various limitations. SuperBulkCopy was written to overcome those limitations and provide a quick working solution.

OK, so that was the quick refresh. Now, there's a new version which just came out, and in addition to the existing abilities, it introduces the following:

#### - Gzip support! You can now bulk import directly from a .gz file

This is a big one -> often the input files are compressed, and traditional import tools usually require a preliminary step of uncompressing everything into a different folder.
SuperBulkCopy will detect those compressed files and will extract the stream in-memory, quickly importing the uncompressed data.
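To illustrate the idea (this is a hypothetical sketch, not SuperBulkCopy's actual implementation), a .gz input can be decompressed as an in-memory stream, with no temporary extraction folder:

```python
# Hypothetical sketch (not SuperBulkCopy's actual code): read a .gz input
# as an in-memory decompressed stream, with no temporary extraction step.
import gzip
import io

def read_lines(path):
    """Yield text lines from 'path', transparently decompressing .gz files."""
    if path.endswith(".gz"):
        # gzip.open decompresses the stream on the fly, in memory
        f = gzip.open(path, mode="rt", encoding="utf-8")
    else:
        f = io.open(path, mode="rt", encoding="utf-8")
    with f:
        for line in f:
            yield line.rstrip("\n")
```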

#### - Wildcard support! Now supporting wildcard* input files

Have multiple files in a folder which all need to be imported? No problem -> the wildcard feature will take care of it all in a single run, whether to the same table or to different tables.
If you wish to use different table names, see the item below!
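As a rough illustration of the idea (names here are made up, not SuperBulkCopy internals), wildcard support boils down to expanding the pattern into a concrete file list before importing each file:

```python
# Rough illustration only (not SuperBulkCopy internals): expand a wildcard
# pattern into the concrete list of files to import in a single run.
import glob

def expand_inputs(pattern):
    """Return every file matching the wildcard pattern, in a stable order."""
    # sorted() keeps the import order deterministic across runs
    return sorted(glob.glob(pattern))
```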

#### - Output table pattern support!

This feature helps you customize the output table name. Based on input file names, dates, unique id's and more.
The supported patterns are:
- %ifn -> Input Filename
- %timestamp -> Timestamp (yyyy-mm-dd-hh-mm-ss)
- %date -> Date (yyyy-mm-dd)
For example, if the Output Table Name is "my_table_%ifn" and the input file name is "my_file.csv", then the table will be named "my_table_my_file_csv".
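As a sketch of how such substitution might behave (the mapping of '.' and other non-alphanumeric characters to '_' in %ifn is inferred from the "my_table_my_file_csv" example; this is not the actual implementation):

```python
# Sketch of the output-table pattern substitution described above. The
# '.'-to-'_' mapping in %ifn is inferred from the documented example;
# this is not SuperBulkCopy's actual code.
import os
import re
from datetime import datetime

def resolve_table_name(pattern, input_path, now=None):
    now = now or datetime.now()
    # %ifn: input filename, with characters invalid in a table name mapped to '_'
    ifn = re.sub(r"[^0-9A-Za-z]", "_", os.path.basename(input_path))
    resolved = pattern.replace("%ifn", ifn)
    resolved = resolved.replace("%timestamp", now.strftime("%Y-%m-%d-%H-%M-%S"))
    resolved = resolved.replace("%date", now.strftime("%Y-%m-%d"))
    return resolved
```

With the documented example, `resolve_table_name("my_table_%ifn", "my_file.csv")` yields `"my_table_my_file_csv"`.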

#### - Added custom field generation -> add the input filename as an additional field to the output table/file

This new feature allows you to add custom fields to the destination table(s).
The current supported fields are:
- Input Filename
- Input Directory
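A hypothetical illustration of what these fields add (the field and function names here are made up for the example): each output row carries the file and directory it came from, which is especially handy when a wildcard import merges many files into one table:

```python
# Hypothetical illustration (field and function names are made up): tag each
# output row with the file and directory it came from, so rows can be traced
# back to their source after a multi-file import.
import csv
import os

def rows_with_source(path):
    directory, filename = os.path.split(os.path.abspath(path))
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            row["InputFilename"] = filename    # the "Input Filename" field
            row["InputDirectory"] = directory  # the "Input Directory" field
            yield row
```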

More cool things to come -- in the meantime, enjoy this new version!

## Wednesday, December 28, 2016

### Database Space Explorer

As part of our app renaming/re-branding, Database File Explorer is now known as Database Space Explorer. The main reason is that this product, which started out very focused on file/filegroup usage, is expanding to many other database-space-related stats.

With that being said, we encourage you to try out the latest version of the app - there are many new features, fixes and enhancements. For example, a "Table Popularity" view which lists tables by their connections and references to/from other objects. This provides another easy way to explore large databases.

Here's a screenshot:

## Wednesday, July 13, 2016

### SqlTableZip is now known as - "SQL Server Query Zipper"

I think names are important. And product names specifically should indicate something about the actual product.

With that in mind, "SQL Server Query Zipper" (or SQL Query Zipper for short) does a better job of explaining what the tool does - even before reading the description.

There's a new version - which in addition to other enhancements, also goes by the new name.

TA-DA!

The console version was also renamed and the executable is now called: SSQZConsole

## Tuesday, May 24, 2016

### SuperBulkCopy (SBCP) Walk-through

Recently, we've introduced a new free tool called SuperBulkCopy (short name: sbcp).

What it does is simple to explain, and yet pretty challenging to implement - it does everything we wanted BULK INSERT, BCP, or any existing flat file importer to do: get the job done, as automatically as possible.

In this post I'll provide a walk-through, with many images. A video will also become available in the future, but hopefully this segment can get you up to speed as to "Why do I REALLY need this!"

Again, as mentioned above, this tool is absolutely free. So just get it and start playing!

Let's begin.

### Part 1: The struggle

Here's the setup - I got a fairly big file from a client, and need to turn it into a SQL Table for further analysis.

I look at the file:

Nope. The same thing.

OK, next step -- I do know (and we'll see this in a moment) that this file has mixed data types, and everything is wrapped with "quotes". Also - it has headers, which eliminates the ability for either BULK INSERT or BCP to handle it correctly.
So we'll do the next natural thing and jump right into the "Text Import Wizard", or use SSIS directly. (Same engine anyway)

Loaded the file, with the correct delimiters, as well as specified the Text Qualifier - A quick preview indicates we're almost good to go:

Mapping the data into a new table, after also using the "Suggest Data Types" feature

But - OH NO! The process has failed.
This can be caused by so many reasons, from data truncation to type mismatch, headers not 100% matching the field structure, and more.

Looking at the friendly logs:

Fun Times!

Now, I'm not saying that everything fails, and even the ones that do are in some cases solvable after a few iterations of trial & error.

But the question is - WHY BOTHER? When there's a free tool today that will do all the hard work for you?

### Part 2: The Easy Way Out

Let's try to do that same thing with SuperBulkCopy.
It's very fast and easy to use, so I'll take the time to show a few different methods of getting the job done.

Let's go! (again)

Okay, after installing the tool, let's run it. I'm using the DoccoLabs launcher but you might as well simply run it directly from the start menu.

On Input Settings, I choose that same file.

Now, even before going further - there's a quick "Preview" option; regardless of file size, it will show you a sample.

Yep, Looks great!
Now, there's not a lot to do: simply define the destination SQL Server connection. I'll even ignore the fine-tuning options and simply let it run.
(Note: for the demo I've only imported part of the file, which is another feature, but one can easily import the entire content)

So - completed. Let's go check the table in SSMS:

It's right there. Looking at the table definition even shows that the best data types were used:

Basically, we're done.
But I want to spend another minute showing you how SuperBulkCopy can run just as easily as a console app from the command line.

After setting all the relevant input-output configuration for this demo, we can save the configuration for later use.

In our case, the configuration looks like this:

    <?xml version="1.0" encoding="UTF-8" ?>
    <config>
      <input>
        <Filename>c:\temp\text_samples\big_file_with_quoted_identifiers_20160301.tsv</Filename>
        <AutoGuessDelimiters>True</AutoGuessDelimiters>
      </input>
      <output>
        <TableName>test1</TableName>
        <OptimizeDataTypes>True</OptimizeDataTypes>
        <ConnectionString>Data Source=127.0.0.1;Initial Catalog=tempdb;Integrated Security=True;</ConnectionString>
        <DropTableIfExists>True</DropTableIfExists>
      </output>
      <AllowAsyncDataWrite>True</AllowAsyncDataWrite>
    </config>

And I can save it using the main menu.

Now - I can start using sbcp.exe
Of course, instead of generating the XML file, I could write it myself. Typing sbcp.exe /? provides all the essential information on how to build a configuration file.
Alternatively, everything *can* be done with simple command-line arguments, but I find the XML option clearer and easier to understand.

So, now let's open a command window and direct sbcp.exe to the configuration file:

And here are the results:

Basically - the exact same result as the UI version produces. Only here - the process can be fully automated. (for example, if such file arrives every day and needs to be turned into a table)

Furthermore - this can run directly from SSMS by using SQLCMD mode. Here's how:

Switch into SQLCMD Mode on the SSMS main menu:

Now, run the same command you just ran from the command line. Don't forget to add the !! indicator.
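Assuming the configuration was saved as c:\temp\sbcp_config.xml (an example path), and assuming sbcp accepts a /ConfigFile switch like the one SqlTableZipConsole uses (check sbcp.exe /? for the exact syntax), the SQLCMD-mode line would look something like:

```
!!sbcp.exe /ConfigFile:c:\temp\sbcp_config.xml
```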

### Part 3: Conclusions

In short, SuperBulkCopy can easily get flat-file input into your SQL Server. It does a lot behind the scenes, such as detecting headers, delimiters, data types and more.
It's a free tool that can potentially turn this annoying task into a rather easy one.

You can get it from this link: http://www.doccolabs.com/products/superbulkcopy

---

Have anything to add? Let us know! (comments, twitter mentions, emails or whatever channel you like best)

## Thursday, May 19, 2016

### SQL Stripes is now FREE (Going EOL)

It feels a bit awkward to say it, but SQL Stripes - the first product that ever came out of DoccoLabs, is reaching its End-Of-Life.

As SQL Stripes is getting old, and after a lot of consideration, I've decided not to write a new "similar" product, mostly due to the ROI given the growing number of other monitoring tools out there.

SQL Stripes, like all of the other products, was written with love and passion, to make the SQL Server users' life easier.
It was written out of need, and served many customers. (And still does to this day)

For existing users who get what they need from the product - feel free to keep working with it, but keep in mind that there will be no more future releases, fixes or any sort of support around it.

In addition, we're making SQL Stripes free! If you would like to keep working with it after the trial is over (or even before it expires), send me an email and I'll send you an activation key.

## Monday, February 22, 2016

### Introduction to SQL Table Zip *Console*

The latest version of our Query-Level Export/Backup App, "SQL Table Zip", has a brand new feature - A stand-alone console version, allowing you to Import & Export any query from your command line / powershell / directly from SSMS.

Here is a simple end-to-end walkthrough with the console version:

To make it easy to access, the setup process, by default, has a new checkbox for adding the installation folder to your environment variables path. This means that you can type SqlTableZipConsole (or SqlTableZip) from anywhere in your command line. An indication that it works would be to simply open a 'cmd' window and type in SqlTableZipConsole like in this screenshot:

From this point on, we'll be using SSMS for the demo, although it can be done anywhere.

The basic argument that SqlTableZipConsole takes is /ConfigFile:\path\to\MyConfigFile.xml

Here's a sample BACKUP config file; notice the <Action> tag and the following properties:

\temp\demo\backup_config_demo.xml
    <?xml version="1.0" encoding="utf-8" ?>
    <config>
      <Action>Backup</Action>
      <Query>SELECT * FROM Production.WorkOrder</Query>
      <BackupFile>c:\temp\ProdWorkOrder.sqz</BackupFile>
    </config>


Next, let's see how we run everything inside SQL Server Management Studio (SSMS)

First thing - switch SSMS to SQLCMD mode:

Next, let's run a simple backup command based on the configuration file above. In SSMS the syntax will look like this:
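Based on the /ConfigFile syntax shown earlier, the SQLCMD-mode command looks something like:

```
!!SqlTableZipConsole /ConfigFile:\temp\demo\backup_config_demo.xml
```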

Let's explain the results: the query result is now stored in ProdWorkOrder.sqz (SqlTableZip format) and has been compressed from the original ~16mb to ~0.5mb! That's quite a reduction.

Now let's see how we restore this file back to a table. For the demo we'll restore the file to a different database under a different table name.

Here's the Restore's configuration file:

\temp\demo\restore_config_demo.xml
    <?xml version="1.0" encoding="utf-8" ?>
    <config>
      <Action>Restore</Action>
      <RestoreFile>c:\temp\ProdWorkOrder.sqz</RestoreFile>
      <RestoreTableName>MyNewTable</RestoreTableName>
      <RestoreConnectionString>Data Source=127.0.0.1;Initial Catalog=Sandbox;Trusted_Connection=True</RestoreConnectionString>
    </config>


Note the different table name provided <RestoreTableName> as well as the different database name in the <RestoreConnectionString> tag.

Now, we'll point SqlTableZipConsole to this configuration file and run:
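Using the same /ConfigFile syntax, the restore run looks something like:

```
!!SqlTableZipConsole /ConfigFile:\temp\demo\restore_config_demo.xml
```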

That's the entire end-to-end operation in a nutshell! Here's a simple select from the newly created table:

#### One final note:

While SqlTableZip is not free software, the restore operation is. This means that even without a purchased license, restoring previously backed-up files is always possible.

## Wednesday, November 18, 2015

### | Part 3: Use the force, Luke!

Choosing the right architecture --

Experience with a wide variety of data stores doesn’t only help with choosing the right architecture, but also with being confident about vetoing bad designs.
This goes both ways; so for the following (intentionally over-exaggerated) scenarios -

Scenario 1:
Requirement:
A new service which keeps track of your company’s orders.
Data specs: there are supposed to be about 100 orders per day, the information must be instantly consistent. Orders relate to products and customers.
Occasionally, some ad-hoc reports are needed, mostly to aggregate orders per customers, products or date range.
Selected solution:
the development team decided to save the orders to Redis, then use a queue to transform the data to an HDFS store,
out of which some map-reduce jobs will be used to return the relevant data.

So yes, the dev team got their hands on some new trending technologies, and they can now add “NoSQL” and “Big Data” to their résumés, but in fact this is a very inappropriate solution for the problem.

Here’s another example -

Scenario 2:
Requirement:
Track the company app’s site visitor interaction in real-time on a heat map. In addition, show geo-locations trend over time.
Data specs: the app is extremely popular, gaining ~2,500 interactions per second.
Proposed solution:
Use a relational database on a very strong, high-end machine to be able to support the 2,500/sec OLTP rate.
Create additional jobs running every few minutes to aggregate the time frames & perform IP lookups.

Again, although this might work, it is not the preferred solution.
In this case a real scale-out architecture will perform better at a lower cost.
A typical ELK stack for example could handle this scenario more naturally.

Can you see where this is going? Knowledge is important to divert the architecture in the right way.
Different specs require different platform implementation and in many cases - a mix.

Inappropriate platform selection can come from many sources, including:

• Developers who want to experiment with different technologies
• PMs/POs lacking the detailed knowledge
• CTOs wishing to brag about using some new technology (preferably while playing golf with other CTOs)
Choosing the right platform for a given solution is extremely important, even when the recommended technology is not the trending one.

| Epilogue

The world of data is constantly changing. The variety of solutions for storing & retrieving data in various shapes and sizes is rapidly growing.
Relational databases are strong, and will stay strong - for storing and retrieving relational data, usually when no scale-out is required. For all the other types - the world has already changed.
If there’s one thing you should take away with you, it’s that, looking forward, you should embrace these new technologies, get to know them, and understand the architectural position of the dominant ones.

Links to Part 1, Part 2