How to Secure Your Azure SQL Database

  1. Authentication and Authorization:
    • Use Azure Active Directory (Azure AD, now Microsoft Entra ID) authentication for better security (see the connection sketch after this list).
    • Implement firewall rules to control access to your database.
    • Assign minimal permissions to users based on their roles (principle of least privilege).
  2. Encryption:
    • Enable Transparent Data Encryption (TDE) to protect data at rest.
    • Use Always Encrypted to protect highly sensitive columns (the data remains encrypted even from database administrators); data in transit is protected by TLS.
    • Consider client-side encryption for additional protection.
  3. Auditing and Monitoring:
    • Enable Azure SQL Auditing to track database activity.
    • Set up Azure Monitor to receive alerts and insights.
    • Regularly review logs and audit trails.
  4. Network Security:
    • Isolate your database using Virtual Network Service Endpoints.
    • Restrict public access and use private endpoints.
  5. Patch Management:
    • Keep your database engine up to date with the latest security patches.
    • Regularly review vulnerability assessments.
  6. Backup and Recovery:
    • Implement automated backups and test recovery procedures (remember a backup is only theoretically there unless it has been tested and proven to work).
    • Store backups conforming to the 3-2-1 Backup Rule explained below (do not assume your backups are safe just because they are in the cloud).
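
As an illustration of the Azure AD authentication point above, here is a minimal Python sketch of connecting to an Azure SQL database with pyodbc. The server and database names are placeholders, and it assumes the Microsoft ODBC Driver 18 for SQL Server and the pyodbc package are installed; other authentication modes (such as ActiveDirectoryPassword or token-based auth) are also possible depending on your environment.

# Minimal sketch: connect to Azure SQL Database using Azure AD authentication.
# YOUR_SERVER and YOUR_DATABASE are placeholders; the ODBC Driver 18 for SQL Server
# and the pyodbc package are assumed to be installed.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:YOUR_SERVER.database.windows.net,1433;"
    "Database=YOUR_DATABASE;"
    "Encrypt=yes;"  # TLS protects data in transit
    "Authentication=ActiveDirectoryInteractive;"  # prompts for an Azure AD sign-in
)

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute("SELECT SUSER_SNAME();")  # confirm which identity connected
print(cursor.fetchone()[0])
conn.close()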

The 3-2-1 Backup Rule: Ensuring Data Resilience

The 3-2-1 Rule is a robust strategy that emphasizes redundancy, resilience, and data availability. Here’s what it entails:

  1. Three Copies of Your Data:
    • Maintain the original data and create at least two additional copies.
  2. Two Different Types of Media for Storage:
    • Store your data on distinct forms of media (e.g., hard drives, tapes) to enhance redundancy.
  3. At Least One Copy Off-Site:
    • Safeguard one backup copy in an off-site location, separate from your primary data and on-site backups.

By adhering to this rule, you mitigate single points of failure, protect against corruption, and ensure data safety even in unexpected events or disasters.
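
To make the rule concrete, here is a toy Python sketch that checks a backup inventory against the three conditions. The structure of the copies list (the media and offsite fields) is purely hypothetical and not tied to any real backup tool.

# Toy sketch only: each copy is a dict with hypothetical "media" and "offsite" fields.
# The list includes the primary copy as well as the backups.
def satisfies_3_2_1(copies):
    has_three_copies = len(copies) >= 3                     # at least three copies
    has_two_media = len({c["media"] for c in copies}) >= 2  # on at least two media types
    has_offsite = any(c["offsite"] for c in copies)         # at least one copy off-site
    return has_three_copies and has_two_media and has_offsite

copies = [
    {"media": "azure_blob", "offsite": True},   # cloud copy, off-site
    {"media": "local_disk", "offsite": False},  # on-site disk copy
    {"media": "tape", "offsite": False},        # on-site tape copy
]
print(satisfies_3_2_1(copies))  # True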

How to get PICO-8 games to run on Retroid Pocket using Retroarch

One of the best ways to play PICO-8 games on a Retroid Pocket is via the Retroarch core retro8.

However, you may run into a problem where only the PICO-8 cart image is displayed rather than Retroarch launching the actual game.

You can solve this problem by doing the following:

In Retroarch go to Settings > User Interface > File Browser > “Use Built-In Image Viewer” and set it to “No”. This way the image viewer will not get in the way by misinterpreting a PICO-8 cart as just a PNG file.

Happy Gaming!

How to add Android as a separate platform in Daijisho

Copy the text below and save it as Android.json

{
    "databaseVersion": 8,
    "platform": {
        "name": "Android",
        "uniqueId": "android",
        "shortname": "android",
        "description": null,
        "acceptedFilenameRegex": "^.*$",
        "scraperSourceList": [
            "RAW:Android"
        ],
        "boxArtAspectRatioId": 0,
        "useCustomBoxArtAspectRatio": false,
        "customBoxArtAspectRatio": null,
        "screenAspectRatioId": 0,
        "useCustomScreenAspectRatio": false,
        "customScreenAspectRatio": null,
        "retroAchievementsAlias": null,
        "extra": ""
    },
    "playerList": [
        {
            "name": "android - activity component player",
            "description": "Android activity component player",
            "acceptedFilenameRegex": "^$",
            "amStartArguments": "-n {android.activity}\n",
            "killPackageProcesses": false,
            "killPackageProcessesWarning": true,
            "extra": ""
        }
    ]
}

Open Daijisho > Settings > (Under All settings) Library > Import Platform > Select the Android.json file.

Now go to the Android Platform > Path > Sync

Note: This is not an official platform. To flag whether an app is a game or not, go to the Daijisho apps section, long press on an app, and mark it as a game or not a game. Apps marked as games will show up in this Android platform after syncing. By default, the emulators themselves will likely be wrongly flagged as games.

A generic Python-based ETL pipeline solution for Databricks

Below is the code necessary to create a Databricks notebook source file that can be imported into Databricks. This file can act as a template for creating ETL logic to build tables in Databricks. Once the notebook is prepared it can be set to run by a Databricks workflow job.

The template is parameterized. This means the developer just needs to provide the destination database, the destination schema, the destination table and the SQL logic.

(Note: this simple example is a full-load solution, not an incremental-load solution. An incremental load can be achieved by writing sufficiently robust SQL that is use-case specific.)

The SQL is provided as a variable, and the table name (or names) are stored in a list, allowing a large degree of flexibility to create a single pipeline that builds multiple database objects.

Another important feature of the code is that it compensates for the fact that Databricks does not natively enforce primary key constraints. A list of primary key fields can be provided, and if any of those keys are null or not distinct the code will throw an error.

The code will also assign metadata fields to each record created including the job run id as the ETL id, the created date and the updated date.

# Databricks notebook source
# MAGIC %md
# MAGIC https://tidbytez.com/<br />
# MAGIC This is an ETL notebook.<br />

# COMMAND ----------

# Libraries
import os
from pyspark.sql import functions as F
from pyspark.sql.types import StringType
from pyspark.sql.functions import lit, concat_ws, isnan, when, count, col
from datetime import datetime, timedelta

# COMMAND ----------

# Functions

# Generate ETL ID
def get_etl_id():
    try:
        run_id = (
            dbutils.notebook.entry_point.getDbutils()
            .notebook()
            .getContext()
            .currentRunId()
            .toString()
        )
        if run_id.isdigit():
            etl_id = int(run_id)
        else:
            # Fall back to a default id when the run id is not numeric (e.g. interactive runs)
            etl_id = 1
        return etl_id
    except Exception:
        print("Could not return an etl_id number")
        return 1


# Build database object
def build_object(dest_db_name, schema_name, table_name, pk_fields, sql_query):

    # Destination Database and table
    table_location = dest_db_name + "." + table_name
    # External table file location
    file_location = "/mnt/" + schema_name + "/" + table_name

    # Create Dataframe
    df = sql_query

    # Count nulls across all Primary Key fields
    # (a plain null check supports string keys too; isnan only applies to numeric columns)
    cnt_pk_nulls = sum(
        df.select(
            [count(when(col(c).isNull(), c)).alias(c) for c in pk_fields]
        ).collect()[0]
    )
    # Dataframe record count
    cnt_rows = df.count()
    # Primary Key distinct count
    cnt_dist = df.dropDuplicates(pk_fields).count()
    # Error message
    message = ""

    # Join metadata to dataframe
    global meta_df
    meta = meta_df
    df = df.withColumn("key", lit(1))

    # inner join on two dataframes
    df = df.join(meta, df.key == meta.key, "inner").drop(df.key).drop(meta.key)

    # Write dataframe to table
    if cnt_pk_nulls == 0:
        if cnt_rows == cnt_dist:
            df.write.mode("overwrite").format("delta").option(
                "mergeSchema", "false"
            ).option("path", file_location).saveAsTable(table_location)
        else:
            message = "Primary Key is not unique"
    else:
        message = "Primary Key contains nulls"

    if message != "":
        raise Exception(message)


# COMMAND ----------

# Variables

# Destinations

# File location
schema_name = "YOUR_SCHEMA_NAME_HERE"

# Database location
dest_db_name = "YOUR_DEST_DATABASE_NAME_HERE"

# PK fields
pk_fields = ["EXAMPLE_ID", "EXAMPLE_LOCATION"]

# Metadata
etl_id = get_etl_id()
t = datetime.utcnow()

# Create metadata dataFrame
data = [(1, etl_id, t, t)]
columns = ["key", "ETL_ID", "CREATED_DATE", "UPDATED_DATE"]
meta_df = spark.createDataFrame(data, columns)
meta_df = meta_df.withColumn("ETL_ID", meta_df["ETL_ID"].cast("bigint"))

# COMMAND ----------

# Table name variable list
table_list = [
    {"table_name": "EXAMPLE_TABLE"}
]

# COMMAND ----------

# Iterate through table variables
for i in range(len(table_list)):

    table_name = table_list[i].get("table_name")

    # SQL query
    sql_query = spark.sql(
        f"""
        SELECT 1 AS EXAMPLE_ID,
        'TEXAS' AS EXAMPLE_LOCATION
        """
    )
    build_object(dest_db_name, schema_name, table_name, pk_fields, sql_query)

How to fix Dolphin GameCube controller button mappings and keep them from being overwritten by RetroBat or Emulation Station

RetroBat and Emulation Station do a great job of mapping controller buttons straight out of the box, but sometimes these settings do not map correctly onto specific emulators.

Dolphin’s GameCube emulation is one such case where the buttons seem to get jumbled.

If you have ever tried to fix the button mappings in Dolphin directly, you might have been frustrated that your manual settings did not stick: the next time you run a game you are back to the same wrong button layout.

This is because RetroBat’s or Emulation Station’s front-end settings take precedence over the individual emulator settings, i.e. the expectation is that you will configure emulators via these front-ends rather than individually. Mostly this works great; however, some of the more detailed settings cannot be set via the front-ends, and the front-ends overwrite the emulators with incorrect default settings.

To solve the GameCube button mapping problem do the following:

Open Dolphin directly; there are various ways to do this, one being via the RetroBat settings.

Click on the Controller icon.

For each port (controller you have connected) click on the “Configure” icon.

In the “Device” drop down select your controller.

In the GameCube Controller options set the buttons as follows:

For a PlayStation controller:

A : Cross

B : Square

X : Circle

Y : Triangle

Z : R1

Start: Start (Options)

For an Xbox controller:

A : A

B : X

X : B

Y : Y

Z : Right Bumper

Start: Start

Now save the settings as a profile.

Reopen RetroBat/Emulation Station and press Start on your controller.

From the Main Menu:

Game Settings > Per System Advanced Configuration > GameCube > Autoconfigure Controllers = “OFF”

This should resolve the problem going forward.

Fire Toolbox Vs the Amazon HD 8 Tablet (2022)

Amazon’s Fire tablets may not provide a complete Android experience, but they can still satisfy most users’ basic needs. They are priced very affordably, but this comes at the cost of locked bootloaders, no Google Play certification, and a heavily customized skin (Amazon’s Fire OS) that is often based on older Android versions and is, let’s face it, horrible to use. However, the aftermarket development community has managed to overcome most of these limitations with the Fire Toolbox.

See link below:

https://www.xda-developers.com/amazon-fire-toolbox-helps-install-google-apps-change-launchers-and-more-on-amazon-fire-tablets/

However, the typical means of de-amazonification is not fully functional on the HD 8 Tablet.

Some important features still work, such as:

Remove lock screen ads

Block OTA updates

Debloating Apps

However, setting a third-party launcher app as the default launcher has become buggy. The most popular choice, Apex, will likely not work. The launcher Lawnchair seems to work without any problems though. It’s a very basic launcher but totally serviceable.

Most importantly though, as of July 2023, the feature to install the Google Play Store and Google Services does not work.

This means apps like YouTube, even if you sideload them, will not work. Perhaps a suitable workaround for this, if you are concerned about your data privacy, is to install Firefox, open YouTube as a webpage, then go to the browser settings and choose “Install”. This will create a shortcut on your Home Screen that looks and functions like an app.

To install apps without the Play Store you can use alternatives like the F-Droid store, the Aurora store (only use a disposable Gmail account), or APKMirror. (Use all suggested alternatives at your own risk.)

All of the above is less than ideal though, so if you were thinking about picking up a cheap Fire tablet on Prime Day expecting it can be totally de-amazoned with a few clicks of a mouse, maybe give it a miss and pay a few extra bucks for something else instead.

How to replace multiple words within a string at once using Python

Below is a quick code snippet example you can reuse to replace multiple words within a string using Python.

s = "The quick brown fox jumps over the lazy dog"
print(s)
for r in (("brown", "red"), ("lazy", "quick")):
    s = s.replace(*r)
print(s)
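
Note that the pairs are applied one after another, so a later replacement sees the output of an earlier one. If you want every word swapped in a single pass instead, a regex-based sketch like the one below (reusing the same example string) is one way to do it.

import re

s = "The quick brown fox jumps over the lazy dog"
replacements = {"brown": "red", "lazy": "quick"}

# Build one pattern matching any target word, then look each match up in the
# dictionary so all words are replaced in a single pass over the string.
pattern = re.compile("|".join(re.escape(word) for word in replacements))
print(pattern.sub(lambda m: replacements[m.group(0)], s))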

PlayStation 1 not showing up as an option in “Consoles” Tab of GarlicOS

If you have populated your RG35XX PS folder with games yet GarlicOS does not present PlayStation as a console option, this is likely because GarlicOS cannot read subfolders and your games are each in their own dedicated folder. For GarlicOS to see your games, they must all sit directly in the console folder.

However, PS games are typically in .bin format and are saved in folders because even single-disc games will have at least two associated files, i.e. the .bin file(s) and the .cue file. For multi-disc games, where there is a .bin file and a .cue file for each disc, and potentially an .m3u file to handle multi-disc operation, the problem is exacerbated.

One solution is to convert your PS games to the .chd format. Converting the PS “discs”, i.e. pairs of .cue and .bin files, to the .chd format results in a single compressed file per disc that takes up much less space.

To convert “discs” to .chd, download the zip of the software “CHDMAN” from the link below:

https://archive.org/details/chdman

Unzipping the file will create a folder CHDMAN.

In this folder open the batch file called “Cue or GDI to CHD” with a text editor and replace the line:

for /r %%i in (*.cue, *.gdi) do chdman createcd -i "%%i" -o "%%~ni.chd"

with:

for /r %%i in (*.cue, *.gdi, *.iso) do chdman createcd -i "%%i" -o "%%~ni.chd"

This update allows the batch file to work with ISO files too.

Now, to convert “discs”, simply drag and drop the .cue and .bin files into the CHDMAN folder and then double-click the batch file “Cue or GDI to CHD” to run it.

This will produce a single .chd file you can then save to the PS folder of your GarlicOS games directory.
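
If you would rather not use the batch file, the same conversion can be scripted. Below is a small Python sketch that loops over a folder and calls chdman for each .cue, .gdi, or .iso file; it assumes the chdman executable from the download above is in the same folder (or on your PATH).

# Sketch: convert every .cue/.gdi/.iso in the current folder to .chd using chdman.
# Assumes the chdman executable is in this folder or on your PATH.
import subprocess
from pathlib import Path

for disc in Path(".").iterdir():
    if disc.suffix.lower() in (".cue", ".gdi", ".iso"):
        output = disc.with_suffix(".chd")
        subprocess.run(
            ["chdman", "createcd", "-i", str(disc), "-o", str(output)],
            check=True,  # stop if chdman reports an error
        )
        print(f"Created {output}")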

Comparing two tables for equality with Spark SQL

A quick way of comparing two tables to determine if they are exactly the same is to calculate the hash sum of each table and then compare the sums. The benefit of this technique is that no matter how many fields there are, and no matter what data types the fields may be, you can use the following query to do the comparison:

SELECT SUM(HASH(*)) FROM t1;
SELECT SUM(HASH(*)) FROM t2;

Of course, if the schemas of the two tables are different, this will generally produce different hash values.
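
In a Databricks or PySpark notebook, the same check can be scripted; below is a minimal sketch (t1 and t2 are placeholder table names, and spark is the notebook’s SparkSession). Bear in mind that equal hash sums strongly suggest, but do not strictly prove, that the tables are identical.

# Sketch: compare two tables by their hash sums (t1 and t2 are placeholder names).
sum1 = spark.sql("SELECT SUM(HASH(*)) AS h FROM t1").collect()[0]["h"]
sum2 = spark.sql("SELECT SUM(HASH(*)) AS h FROM t2").collect()[0]["h"]

if sum1 == sum2:
    print("Tables appear identical")
else:
    print("Tables differ")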

How to insert a record with Spark SQL

INSERT INTO a table with the VALUES clause, as available in other SQL variants, may not be supported depending on your Spark SQL version. For single-record inserts, the example below provides two options that use SELECT instead:

--CREATE test table
CREATE TABLE TestSchema.InsertTest USING DELTA AS (SELECT 1 AS row_id, 'value1' AS field_1, 'value2' AS field_2)

--INSERT INTO test table
INSERT INTO TestSchema.InsertTest SELECT t.* FROM (SELECT 2, 'value3', 'value4') t;

--INSERT INTO test table while aliasing field names
INSERT INTO TestSchema.InsertTest SELECT t.* FROM (SELECT 3 AS row_id, 'value5' AS field_1, 'value6' AS field_2) t;

--Confirm insert
SELECT * FROM TestSchema.InsertTest
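
If you are working from a notebook rather than pure SQL, the same single-record insert can also be done through the DataFrame API. The sketch below assumes the TestSchema.InsertTest table created above; row_id is cast to int to match the column type produced by SELECT 1 AS row_id.

# Sketch: append a single row to the Delta table created above via the DataFrame API.
from pyspark.sql.functions import col

new_row = spark.createDataFrame(
    [(4, "value7", "value8")],
    ["row_id", "field_1", "field_2"],
).withColumn("row_id", col("row_id").cast("int"))  # match the existing int column type

new_row.write.mode("append").format("delta").saveAsTable("TestSchema.InsertTest")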