INSERT INTO a table with the VALUES option, as available in other SQL variants, is not currently supported in Spark SQL. For single-record inserts, the example below provides two options:
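A minimal sketch of both workarounds, assuming a hypothetical table sales.customer with an id and a name column:

```sql
-- Option 1: INSERT INTO ... SELECT with literal values (no FROM clause needed)
INSERT INTO sales.customer
SELECT 1, 'Jane Smith';

-- Option 2: UNION ALL several literal SELECTs, which also scales to more than one record
INSERT INTO sales.customer
SELECT 2, 'John Doe'
UNION ALL
SELECT 3, 'Ada Lovelace';
```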
Use case: sometimes files are processed where nulls are represented with text like “NULL”, meaning the field is not actually empty or null.
Below are some Python functions and a test demonstrating the functionality.
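A minimal sketch, assuming the goal is converting literal “NULL” placeholders to Python None; the function names are illustrative:

```python
def null_text_to_none(value, null_tokens=("NULL", "null", "Null")):
    """Return None when value is a literal null placeholder, otherwise return it unchanged."""
    if isinstance(value, str) and value.strip() in null_tokens:
        return None
    return value


def clean_record(record):
    """Apply null_text_to_none to every field in a record (a dict)."""
    return {key: null_text_to_none(val) for key, val in record.items()}


def test_clean_record():
    record = {"id": "7", "name": "NULL", "city": " null ", "notes": "not null here"}
    cleaned = clean_record(record)
    assert cleaned["name"] is None
    assert cleaned["city"] is None               # whitespace is stripped before comparing
    assert cleaned["notes"] == "not null here"   # only exact placeholders are replaced
    print("All assertions passed.")


if __name__ == "__main__":
    test_clean_record()
```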
A dynamic pivot table means you do not need to hard code column names: a dynamic query fetches the field values from a column and uses them as the column names while pivoting the source table.
Good thing there are some code examples below you can just steal and alter as you need.
The first example simply returns the results as a SELECT; the second writes them to a global temp table called ##Result.
A use case for this might be an ongoing requirement to pivot a table where the column names keep changing as the field values change.
Example 1: Return as SELECT
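A minimal sketch, assuming SQL Server 2017+ (for STRING_AGG) and a hypothetical source table dbo.SalesSource with Region, Category, and Amount columns:

```sql
DECLARE @Columns NVARCHAR(MAX), @Sql NVARCHAR(MAX);

-- Build the column list dynamically from the distinct Category values
SELECT @Columns = STRING_AGG(CONVERT(NVARCHAR(MAX), QUOTENAME(Category)), ', ')
FROM (SELECT DISTINCT Category FROM dbo.SalesSource) AS c;

-- Pivot the source table using the dynamically built column list
SET @Sql = N'
SELECT Region, ' + @Columns + N'
FROM (SELECT Region, Category, Amount FROM dbo.SalesSource) AS src
PIVOT (SUM(Amount) FOR Category IN (' + @Columns + N')) AS pvt;';

EXEC sys.sp_executesql @Sql;
```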
Example 2: Write output to a table
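The same sketch adapted to persist the output, using SELECT ... INTO to write the pivoted rows into the ##Result global temp table:

```sql
DECLARE @Columns NVARCHAR(MAX), @Sql NVARCHAR(MAX);

SELECT @Columns = STRING_AGG(CONVERT(NVARCHAR(MAX), QUOTENAME(Category)), ', ')
FROM (SELECT DISTINCT Category FROM dbo.SalesSource) AS c;

-- Drop the global temp table if a previous run left it behind
IF OBJECT_ID('tempdb..##Result') IS NOT NULL DROP TABLE ##Result;

-- SELECT ... INTO writes the pivoted output to ##Result
SET @Sql = N'
SELECT Region, ' + @Columns + N'
INTO ##Result
FROM (SELECT Region, Category, Amount FROM dbo.SalesSource) AS src
PIVOT (SUM(Amount) FOR Category IN (' + @Columns + N')) AS pvt;';

EXEC sys.sp_executesql @Sql;

SELECT * FROM ##Result;
```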
The following code snippet creates a table called customer in a “sales” schema.
If no schema is specified, the table will be created in the default Spark database.
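A minimal sketch; the column definitions and the Parquet format are illustrative assumptions:

```sql
-- Create the schema first if it does not already exist
CREATE SCHEMA IF NOT EXISTS sales;

-- Create the customer table within the sales schema
CREATE TABLE IF NOT EXISTS sales.customer (
    customer_id   INT,
    customer_name STRING,
    created_date  DATE
)
USING PARQUET;
```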
So you have a PowerShell script and you just want to run it without messing around with permissions, policies, signing, or any other crap. (Yes, yes, I know all those things are vital for system-wide security, but you’re in a hurry, damn it!)
Right-click PowerShell and run it as administrator.
When the terminal is open, run the following line:
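The usual candidate here is relaxing the execution policy for the machine; Unrestricted is one option (Bypass is another):

```powershell
Set-ExecutionPolicy Unrestricted
```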
When prompted, type the letter A and press Enter (if applicable).
Then run the line below, including the “&” at the start, with the path pointing at your script, e.g.:
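For example (the path and script name are placeholders):

```powershell
& "C:\Scripts\MyScript.ps1"
```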
Below is example PowerShell code to create a directory structure from file creation dates.
A use case for this code might be a need to organize a collection of files stored on an SFTP server or NAS. The code can be modified to loop through the files, placing each one into nested sub directories organized into Year\Month\Day folders.
To use the logic, first create a test folder on your computer.
Then copy the logic at the end of this post into a text editor, and find and replace the placeholder path with your actual test folder location.
Save the code as a PowerShell file called “DirFromDate.ps1” in your test folder.
To run the file, open PowerShell and paste the following line:
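Judging by the A prompt in the next step, this is presumably the same execution-policy change used in the post above:

```powershell
Set-ExecutionPolicy Unrestricted
```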
Hit return and then enter the letter A if requested.
Then enter the following, replacing “YourTestDirectory” with your actual test directory location:
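For example:

```powershell
& "C:\YourTestDirectory\DirFromDate.ps1"
```

And a minimal sketch of the script itself, written against the behaviour described above; C:\YourTestDirectory is the placeholder path to find and replace:

```powershell
# DirFromDate.ps1
# Moves each file in the test folder into Year\Month\Day sub directories
# based on the file's creation date.
# Find and replace C:\YourTestDirectory with your test folder location.
$SourcePath = "C:\YourTestDirectory"

# Skip the script itself so it is not moved into a dated folder
Get-ChildItem -Path $SourcePath -File |
    Where-Object { $_.Name -ne "DirFromDate.ps1" } |
    ForEach-Object {
        # Build a Year\Month\Day path from the file's creation date
        $target = Join-Path $SourcePath ("{0:yyyy}\{0:MM}\{0:dd}" -f $_.CreationTime)

        # Create the dated folder (and any parents) if it does not exist
        if (-not (Test-Path $target)) {
            New-Item -Path $target -ItemType Directory | Out-Null
        }

        Move-Item -Path $_.FullName -Destination $target
    }
```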
The script below will, provided the login you are using has adequate permissions, return the schema permissions a user has.
Commented out at the end of the script are examples of the types of permission you can assign, again provided the login you are using has adequate permissions.
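A minimal sketch; YourUserName and the sales schema are placeholders:

```sql
-- List the schema level permissions held by a given user
SELECT
    pr.name  AS UserName,
    s.name   AS SchemaName,
    pe.permission_name,
    pe.state_desc
FROM sys.database_permissions AS pe
INNER JOIN sys.database_principals AS pr
    ON pe.grantee_principal_id = pr.principal_id
INNER JOIN sys.schemas AS s
    ON pe.major_id = s.schema_id
WHERE pe.class_desc = 'SCHEMA'
  AND pr.name = 'YourUserName';  -- replace with the user to check

/* Examples of the types of permission you can assign:
GRANT SELECT ON SCHEMA::sales TO YourUserName;
GRANT INSERT, UPDATE, DELETE ON SCHEMA::sales TO YourUserName;
GRANT EXECUTE ON SCHEMA::sales TO YourUserName;
DENY DELETE ON SCHEMA::sales TO YourUserName;
*/
```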
Below is a script you can modify to create a Login on an Azure SQL Server instance, along with Users created from that Login within its databases.
To use the script, don’t execute it all at once. Run it in stages:
Sections 1 & 2 can be executed together.
Section 3 requires you to switch to the target database.
Section 4 relates to limiting data access to a particular role or schema.
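A minimal sketch with hypothetical names (DemoLogin, the sales schema). Note that USE is not supported on Azure SQL Database, which is why Section 3 means reconnecting your session to the target database:

```sql
/* Section 1: connect to the master database and create the server level Login */
CREATE LOGIN DemoLogin WITH PASSWORD = 'Str0ng!Passw0rdHere';

/* Section 2: still connected to master, optionally create a User there for the Login */
CREATE USER DemoLogin FOR LOGIN DemoLogin;

/* Section 3: reconnect to the target database, then create the User from the Login */
CREATE USER DemoLogin FOR LOGIN DemoLogin;

/* Section 4: limit data access to a particular role or schema */
ALTER ROLE db_datareader ADD MEMBER DemoLogin;
-- or scope access to a single schema instead:
-- GRANT SELECT ON SCHEMA::sales TO DemoLogin;
```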