PowerShell Nuggets


Update PowerShell… from PowerShell (source):

iex "& { $(irm https://aka.ms/install-powershell.ps1) } -UseMSI"

Benchmark your code (source):

Measure-Command
[-InputObject <PSObject>]
[-Expression] <ScriptBlock>
[<CommonParameters>]

Multithreading (topic discussion):
PSJobs
Runspace
Foreach-Object -Parallel (requires PowerShell 7+, so no PowerShell ISE)
Synchronized hashtable for shared variables between threads (source). One limitation: values can't contain a newline or ";", since those are the delimiters of the data.


SQL I/O with Invoke-Sqlcmd


PowerShell hashtables cannot handle values that have a semicolon ";" or line break "\n" in them (source). Thus, a hashtable isn't good for storing text blobs. If you need to escape data out of a parallel thread, save it to a unique file and then spool it up afterwards.


Some useful string resources: https://www.red-gate.com/simple-talk/sysadmin/powershell/working-with-powershell-strings/ . Notably this nugget:

“The solution is to take advantage of scoping and to use parentheses as follows:”

write-host "This is all about $($person1.Name) who is $($person1.Age)"

Websites with useful resources:
https://morgantechspace.com/category/powershell-tips
http://grr.blahnet.com/powershell
https://adamtheautomator.com/tag/powershell/

Calculating a credit card’s checksum in T-SQL


I spent a day looking around to find a graceful checksum calculation for the 16th digit in SQL. Here’s my final version: it takes your source table and credit card column and returns your output_dataset.

/*
CC checksum logic
Logic based on the logic from these pages:
https://www.mssqltips.com/sqlservertip/3319/implementing-luhns-algorithm-in-tsql-to-validate-credit-card-numbers/
https://www.freeformatter.com/credit-card-number-generator-validator.html#:~:text=Multiply%20the%20digits%20in%20odd%20positions%20%281%2C%203%2C,to%20get%20a%20multiple%20of%2010%20%28Modulo%2010%29

This code is designed in-line so you can feed large datasets through it, rather than doing line-by-line processing.

Put your source table in the source_table CTE.
In the CTE definition of population_list, set CC_to_checksum to your source_table's credit card column; the 16th number is calculated from it.
Output is source_table.*, CC_to_checksum char(15), checksum_number char(1), final_CC_number char(16).
*/

WITH source_table
AS (SELECT DISTINCT
           LEFT(REPLACE(
                           REPLACE(
                                      CAST(RAND(num * 1.4) * 100000 AS CHAR(7)) + CAST(RAND(num * .8) * 135790 AS CHAR(7))
                                      + CAST(RAND(num * 3.14159) * 123456 AS CHAR(7)),
                                      '.',
                                      ''
                                  ),
                           ' ',
                           ''
                       ), 15) AS example_cc_number
    FROM
    (
        SELECT t.*
        FROM
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS num
            FROM INFORMATION_SCHEMA.COLUMNS
        ) t
        WHERE num <= 15
    ) gimme_cards --Make example CCs
),
     population_list
AS (SELECT *,
           example_cc_number AS CC_to_checksum --Define your Credit_Card_Number source column (only left 15 characters are taken into account)
    FROM source_table),
     output_dataset
AS (SELECT pl.*,
           k AS checksum_number,
           final_formatting.CC_to_checksum + k AS final_CC_number
    FROM
    (
        SELECT CC_to_checksum,
               RIGHT(CAST(10
                          - (char_01 + char_02 + char_03 + char_04 + char_05 + char_06 + char_07 + char_08 + char_09
                             + char_10 + char_11 + char_12 + char_13 + char_14 + char_15
                            ) % 10 AS VARCHAR), 1) AS k
        FROM
        (
            SELECT CC_to_checksum,
                   char_01 % 10 + CAST(char_01 / 10 AS INT) AS char_01,
                   char_02 % 10 + CAST(char_02 / 10 AS INT) AS char_02,
                   char_03 % 10 + CAST(char_03 / 10 AS INT) AS char_03,
                   char_04 % 10 + CAST(char_04 / 10 AS INT) AS char_04,
                   char_05 % 10 + CAST(char_05 / 10 AS INT) AS char_05,
                   char_06 % 10 + CAST(char_06 / 10 AS INT) AS char_06,
                   char_07 % 10 + CAST(char_07 / 10 AS INT) AS char_07,
                   char_08 % 10 + CAST(char_08 / 10 AS INT) AS char_08,
                   char_09 % 10 + CAST(char_09 / 10 AS INT) AS char_09,
                   char_10 % 10 + CAST(char_10 / 10 AS INT) AS char_10,
                   char_11 % 10 + CAST(char_11 / 10 AS INT) AS char_11,
                   char_12 % 10 + CAST(char_12 / 10 AS INT) AS char_12,
                   char_13 % 10 + CAST(char_13 / 10 AS INT) AS char_13,
                   char_14 % 10 + CAST(char_14 / 10 AS INT) AS char_14,
                   char_15 % 10 + CAST(char_15 / 10 AS INT) AS char_15
            FROM
            (
                SELECT CC_to_checksum,
                       char_01 * 2 AS char_01,
                       char_02,
                       char_03 * 2 AS char_03,
                       char_04,
                       char_05 * 2 AS char_05,
                       char_06,
                       char_07 * 2 AS char_07,
                       char_08,
                       char_09 * 2 AS char_09,
                       char_10,
                       char_11 * 2 AS char_11,
                       char_12,
                       char_13 * 2 AS char_13,
                       char_14,
                       char_15 * 2 AS char_15
                FROM
                (
                    SELECT CC_to_checksum,
                           SUBSTRING(CC_to_checksum, 01, 1) AS char_01,
                           SUBSTRING(CC_to_checksum, 02, 1) AS char_02,
                           SUBSTRING(CC_to_checksum, 03, 1) AS char_03,
                           SUBSTRING(CC_to_checksum, 04, 1) AS char_04,
                           SUBSTRING(CC_to_checksum, 05, 1) AS char_05,
                           SUBSTRING(CC_to_checksum, 06, 1) AS char_06,
                           SUBSTRING(CC_to_checksum, 07, 1) AS char_07,
                           SUBSTRING(CC_to_checksum, 08, 1) AS char_08,
                           SUBSTRING(CC_to_checksum, 09, 1) AS char_09,
                           SUBSTRING(CC_to_checksum, 10, 1) AS char_10,
                           SUBSTRING(CC_to_checksum, 11, 1) AS char_11,
                           SUBSTRING(CC_to_checksum, 12, 1) AS char_12,
                           SUBSTRING(CC_to_checksum, 13, 1) AS char_13,
                           SUBSTRING(CC_to_checksum, 14, 1) AS char_14,
                           SUBSTRING(CC_to_checksum, 15, 1) AS char_15
                    FROM population_list
                ) parse_characters
            ) pre_sum_conversion
        ) roll_it_up
    ) final_formatting
        INNER JOIN population_list pl
            ON pl.CC_to_checksum = final_fomatting.CC_to_checksum)
SELECT *
FROM output_dataset;

The idea behind this is that by doing in-line processing, we can take advantage of the set-based stream processing that SQL Server is so good at, rather than creating an in-line cursor.

I have populated the source_table with garbage numbers to show that it works. Please replace that full CTE with your actual view/table.

PL/SQL note: You’ll need to adjust at least the % to MOD(m, n) and SUBSTRING() to SUBSTR().
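
For example, the converted pieces would look roughly like this (an untested Oracle sketch):

--T-SQL: SUBSTRING(CC_to_checksum, 01, 1), and char_01 % 10 + CAST(char_01 / 10 AS INT)
--PL/SQL equivalents:
SELECT SUBSTR('456789012345678', 01, 1) AS char_01, --SUBSTRING() becomes SUBSTR()
       MOD(14, 10) + TRUNC(14 / 10) AS folded_14 --% becomes MOD(); TRUNC() replaces CAST(x/10 AS INT), since Oracle division keeps decimals
FROM DUAL;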

Mimicking the MAXXAIR panel for automation


So I think I’ve mapped out the MAXXAIR RJ45 controller. I’m going to try to wire this into my setup.

Thread of others working on this as well: https://www.fordtransitusaforum.com/threads/maxxfan-rj-45-and-rj-11-jacks.24505/page-1

Panel button mappings:
[on/off]: close between pin2 & pin3
[Speed+]: close between pin1 & pin3
[Speed-]: close between pin1 & pin4
[in/out]: close between pin2 & pin4
[auto]: close between pin2 & pin5
[open/close]: close between pin1 & (two diodes – to button, + from pin3 & pin4)
[panel LED]: + on pin6, – on pin8
Pin7 is not connected, so who knows what logic was happening there.

I’ll post updates as I make a physical controller for this.

Saint Charles SQL Server Meetup


I did my first presentation on what my career has been about for over a decade! For those who would like to relive the experience, here’s the video:

And here’s the different blocks of code used:

--This query gives you a mapping of all the tables and columns
SELECT SCHEMA_NAME(tab.schema_id) AS schema_name,
       tab.name AS table_name,
       col.column_id,
       col.name AS column_name,
       t.name AS data_type,
       col.max_length,
       col.precision
FROM sys.tables AS tab
    INNER JOIN sys.columns AS col
        ON tab.object_id = col.object_id
    LEFT JOIN sys.types AS t
        ON col.user_type_id = t.user_type_id
ORDER BY schema_name,
         table_name,
         column_id;
--This query gives you row counts for all user tables
SELECT OBJ.name AS TABLE_NAME,
       IDX.rows AS ROWS_COUNT
FROM sys.sysobjects AS OBJ
    INNER JOIN sys.sysindexes AS IDX
        ON OBJ.id = IDX.id
WHERE OBJ.type = 'U' --USER TABLES
      AND IDX.indid < 2
ORDER BY 2 DESC;
--This query gives you an idea of the "largest" table
--code examples from https://social.technet.microsoft.com/wiki/contents/articles/25102.how-to-get-row-counts-for-all-tables.aspx
SELECT TABLE_SCHEMA,
       TAB_COL_CNT.TABLE_NAME,
       COLUMN_COUNT,
       ROWS_COUNT,
       CAST(ROWS_COUNT AS BIGINT) * CAST(COLUMN_COUNT AS BIGINT) AS DATA_POINTS_POSSIBLE
FROM
(
    SELECT SCHEMA_NAME(TAB.schema_id) AS TABLE_SCHEMA,
           TAB.name AS TABLE_NAME,
           COUNT(COL.column_id) AS COLUMN_COUNT
    FROM sys.tables AS TAB
        INNER JOIN sys.columns AS COL
            ON TAB.object_id = COL.object_id
        LEFT JOIN sys.types AS T
            ON COL.user_type_id = T.user_type_id
    GROUP BY SCHEMA_NAME(TAB.schema_id),
             TAB.name
) TAB_COL_CNT
    LEFT JOIN
    (
        SELECT OBJ.name AS TABLE_NAME,
               IDX.rows AS ROWS_COUNT
        FROM sys.sysobjects AS OBJ
            INNER JOIN sys.sysindexes AS IDX
                ON OBJ.id = IDX.id
        WHERE OBJ.type = 'U' --USER TABLES
              AND IDX.indid < 2
    ) TAB_ROW_CNT
        ON TAB_COL_CNT.TABLE_NAME = TAB_ROW_CNT.TABLE_NAME
ORDER BY 5 DESC;
--This query tells you what columns are used throughout the DB, ordered from most to least
SELECT COUNT(1) OVER (PARTITION BY col.name) AS times_column_used,
       col.name AS column_name,
       SCHEMA_NAME(tab.schema_id) AS schema_name,
       tab.name AS table_name,
       col.column_id
FROM sys.tables AS tab
    INNER JOIN sys.columns AS col
        ON tab.object_id = col.object_id
    LEFT JOIN sys.types AS t
        ON col.user_type_id = t.user_type_id
ORDER BY 1 DESC,
         2,
         3;

ICOD Framework


The source is a single table of pivoted data, which will be referred to as ICOD_Main. This allows all data, now and in the future, to be loaded. ICOD_Main’s data structure (a DDL sketch follows the list):

  • Unit_Id: This is a varchar that is unique to the object, usually a business key for flexibility and long term support
    • Example would be XFMR1234 or CBKR231A
  • Normalized_Data_Point: This is a varchar that is the end-mapped name of the point
  • Data_Point_Source_System: This is the source system that the data point came from
  • Data_Point_Source_Table: This is the source system’s table that the data point came from
  • Data_Point_Source_Point: This is the source system’s table’s column that the data came from
  • Data_Point_Datetime: This is a full datetime column that has the approved date and time of the record
  • Data_Value: This is a varchar(4000) that has the data point stored. It holds all data types (except blobs, XML, etc.), and the data is converted before and after storage based on the Normalized_Data_Point’s assumed data type
  • Detail_Notes: This is also a varchar(4000) and is an overflow of data, because there’s times that you need a little more contextual information
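
A minimal DDL sketch of that structure (T-SQL flavored; the column sizes other than the two varchar(4000)s are assumptions, adjust them for your environment):

CREATE TABLE ICOD_Main
(
    Unit_Id                  VARCHAR(50)   NOT NULL, --business key, e.g. XFMR1234
    Normalized_Data_Point    VARCHAR(100)  NOT NULL, --end-mapped point name
    Data_Point_Source_System VARCHAR(100),
    Data_Point_Source_Table  VARCHAR(100),
    Data_Point_Source_Point  VARCHAR(100),
    Data_Point_Datetime      DATETIME      NOT NULL, --approved date and time of the record
    Data_Value               VARCHAR(4000),          --everything stored as text; converted on load and on read
    Detail_Notes             VARCHAR(4000)           --overflow for extra context
);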

There are a number of identified data blocks stored inside ICOD_Main:

  1. Test results: this is a single result
    1. Although this seems basic, this is the meat of the ICOD. This allows for quick cube analysis across all units, time, or sets of points
  2. Top level, single result: this is useful for reference information
    1. This is where nameplate or current installation status information goes, with the idea that this data is not a slowly changing dimension. This is a 1:1 dataset.
  3. Event flags: Useful as an indicator of something happening in history.
    1. An example is converting the logic of a qualitative event into a quantitative point.

When loading data into ICOD_Main, there’s a number of processing steps:

  1. Load what you know (automated)
    1. Go get test data. It’s simple and easy.
    2. Load more complex data points; these may require remapping or recalculation from the test data.
  2. Make a ledger
    1. Store data that comes in from a pipeline that isn’t automatically ELT’d from a source.
  3. Preprocess anything you think would be useful.

When fetching data for analysis, there are a few logical steps. Please note, I’m going to reference a @rundate. This variable lets you snap back in time, so you can see what the results were back then:

  1. Fetch snapshot data based on the last data points before your @rundate. This query is a bit tricky, and the exact syntax depends on the DB you’re using.
    1. MAX(Data_Point_Datetime) OVER (PARTITION BY Unit_Id, Normalized_Data_Point) AS Last_Normalized_Data_Point_Datetime. Then put that in a subquery and filter with a WHERE clause that matches the datetimes, returning the last one (see the sketch after this list). This is annoying to set up, but powerful when done.
    2. The important thing is to focus on the scope of data you’re hunting for. Take the data from the main ICOD table and store it into a smaller set.
  2. Pull what data you need for trending.
    1. These would be <= @rundate
    2. The analysis for these would be done before storing them into your reporting analysis.
  3. In Tri-State’s analysis, we do some post-processing looking for specific indicators, and use those as health delimiters.
  4. Pull your reference data.
  5. Wrap up all the data.
    1. The output really doesn’t matter too much, but the functionality to get to the end is the main point.
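
Here’s a sketch of the step-1 snapshot fetch (T-SQL flavored; the ICOD_Main columns from above, with @rundate set to wherever you want to snap back to):

DECLARE @rundate DATETIME = '2020-01-01'; --snap-back date; pick your own

SELECT Unit_Id,
       Normalized_Data_Point,
       Data_Value
FROM
(
    SELECT Unit_Id,
           Normalized_Data_Point,
           Data_Value,
           Data_Point_Datetime,
           MAX(Data_Point_Datetime) OVER (PARTITION BY Unit_Id, Normalized_Data_Point) AS Last_Normalized_Data_Point_Datetime
    FROM ICOD_Main
    WHERE Data_Point_Datetime <= @rundate --only data on or before the run date
) last_points
WHERE Data_Point_Datetime = Last_Normalized_Data_Point_Datetime; --keep the latest row per unit/point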

Things to hire for:


Looking at asset management personalities, I have noticed that these are solid personality principles to hire for. Taken from: https://www.transit.dot.gov/sites/fta.dot.gov/files/docs/regulations-and-guidance/asset-management/133286/building-asset-management-teamrtd2019.pdf

Key Attributes of the person:
1. Embrace uncertainty
2. Problem Solver
3. Objectivity and Self-awareness
4. Curiosity
5. Grit & determination
6. Lifelong learner
7. Credit giver – No big egos
8. People who know they don’t know it all…

What they bring to the team:

  1. Balance of interest in technical and business strategy
  2. Some good system and structured thinkers
  3. Longer term perspective: ‘And then what?’
  4. Emotional intelligence and communications skills
  5. Strong proportion of people who can ‘embrace uncertainty’, probabilistic and scenario thinking
  6. Openness to change – but not novelty for its own sake
  7. Enough leadership skills to get others to buy into our new ways of working

Hierarchy view for rolling up data


Most hierarchy code I’ve found relies on having write access to the source system to write back a hierarchy ancestor table. This is a view that rolls up data in a way that performs better and is more flexible than anything I’ve seen. It should support up to 10 levels of parents. It was designed in an Oracle database, but it should work in any SQL environment. Please make sure there are indexes on the ID columns.

WITH RELATIONSHIP_VALUES AS (
 SELECT [ROW_ID] AS CHILD_ID,
 [ROW_PARENT_ID] AS ANCESTOR_ID
 FROM [DATA_SOURCE]
) SELECT ANCESTORS.CHILD_ID,
 CASE DISTANCE
 WHEN 0 THEN ANCESTOR_ID_0
 WHEN 1 THEN ANCESTOR_ID_1
 WHEN 2 THEN ANCESTOR_ID_2
 WHEN 3 THEN ANCESTOR_ID_3
 WHEN 4 THEN ANCESTOR_ID_4
 WHEN 5 THEN ANCESTOR_ID_5
 WHEN 6 THEN ANCESTOR_ID_6
 WHEN 7 THEN ANCESTOR_ID_7
 WHEN 8 THEN ANCESTOR_ID_8
 WHEN 9 THEN ANCESTOR_ID_9
 WHEN 10 THEN ANCESTOR_ID_10
 END
 AS ANCESTOR_ID,
 DISTANCE,
 CASE WHEN LEAFS.ANCESTOR_ID IS NULL
 THEN 1
 ELSE 0
 END
 AS IS_LEAF
FROM (
 SELECT DISTANCE_0.CHILD_ID,
 CASE WHEN DISTANCE_9.ANCESTOR_ID IS NOT NULL THEN 10
 WHEN DISTANCE_8.ANCESTOR_ID IS NOT NULL THEN 9
 WHEN DISTANCE_7.ANCESTOR_ID IS NOT NULL THEN 8
 WHEN DISTANCE_6.ANCESTOR_ID IS NOT NULL THEN 7
 WHEN DISTANCE_5.ANCESTOR_ID IS NOT NULL THEN 6
 WHEN DISTANCE_4.ANCESTOR_ID IS NOT NULL THEN 5
 WHEN DISTANCE_3.ANCESTOR_ID IS NOT NULL THEN 4
 WHEN DISTANCE_2.ANCESTOR_ID IS NOT NULL THEN 3
 WHEN DISTANCE_1.ANCESTOR_ID IS NOT NULL THEN 2
 WHEN DISTANCE_0.ANCESTOR_ID IS NOT NULL THEN 1
 ELSE 0 --roots with a NULL parent still get a distance-0 self row
 END
 AS MAX_DISTANCE,
 DISTANCE_0.CHILD_ID AS ANCESTOR_ID_0,
 DISTANCE_0.ANCESTOR_ID AS ANCESTOR_ID_1,
 DISTANCE_1.ANCESTOR_ID AS ANCESTOR_ID_2,
 DISTANCE_2.ANCESTOR_ID AS ANCESTOR_ID_3,
 DISTANCE_3.ANCESTOR_ID AS ANCESTOR_ID_4,
 DISTANCE_4.ANCESTOR_ID AS ANCESTOR_ID_5,
 DISTANCE_5.ANCESTOR_ID AS ANCESTOR_ID_6,
 DISTANCE_6.ANCESTOR_ID AS ANCESTOR_ID_7,
 DISTANCE_7.ANCESTOR_ID AS ANCESTOR_ID_8,
 DISTANCE_8.ANCESTOR_ID AS ANCESTOR_ID_9,
 DISTANCE_9.ANCESTOR_ID AS ANCESTOR_ID_10
 FROM RELATIONSHIP_VALUES DISTANCE_0
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_1
 ON DISTANCE_0.ANCESTOR_ID = DISTANCE_1.CHILD_ID
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_2
 ON DISTANCE_1.ANCESTOR_ID = DISTANCE_2.CHILD_ID
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_3
 ON DISTANCE_2.ANCESTOR_ID = DISTANCE_3.CHILD_ID
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_4
 ON DISTANCE_3.ANCESTOR_ID = DISTANCE_4.CHILD_ID
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_5
 ON DISTANCE_4.ANCESTOR_ID = DISTANCE_5.CHILD_ID
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_6
 ON DISTANCE_5.ANCESTOR_ID = DISTANCE_6.CHILD_ID
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_7
 ON DISTANCE_6.ANCESTOR_ID = DISTANCE_7.CHILD_ID
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_8
 ON DISTANCE_7.ANCESTOR_ID = DISTANCE_8.CHILD_ID
 LEFT JOIN RELATIONSHIP_VALUES DISTANCE_9
 ON DISTANCE_8.ANCESTOR_ID = DISTANCE_9.CHILD_ID
 ) ANCESTORS
LEFT JOIN (
 SELECT DISTINCT ANCESTOR_ID --DISTINCT stops row multiplication when a parent has multiple children
 FROM RELATIONSHIP_VALUES
 ) LEAFS
ON
 ANCESTORS.CHILD_ID = LEAFS.ANCESTOR_ID
LEFT JOIN (
 SELECT 0 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 1 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 2 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 3 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 4 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 5 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 6 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 7 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 8 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 9 AS DISTANCE
 FROM DUAL
 UNION
 SELECT 10 AS DISTANCE
 FROM DUAL
 ) LEVEL_DISTANCES
ON
 LEVEL_DISTANCES.DISTANCE <= ANCESTORS.MAX_DISTANCE
ORDER BY 1,3;
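
As a usage sketch for the rollup, assuming the view above is saved as HIERARCHY_ROLLUP and you have a FACT_TABLE with UNIT_ID and COST columns (all hypothetical names):

SELECT H.ANCESTOR_ID,
       SUM(F.COST) AS ROLLED_UP_COST --each ancestor totals itself (the distance-0 row) plus every descendant
FROM HIERARCHY_ROLLUP H
INNER JOIN FACT_TABLE F
    ON F.UNIT_ID = H.CHILD_ID
GROUP BY H.ANCESTOR_ID
ORDER BY 1;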

Real-Time Planet Tracking System & Trajectory Prediction (Self-adjusting pan mechanism [MPU-9250]) {RTPT} with Arduino and GPS


I might have to wire up my telescope with this, using my LinkIt ONE.


Introduction

This project aims to make a system that effectively tracks celestial bodies (such as planets) with a fair amount of accuracy. We will be using some algorithms along with a processing unit for the calculations, and a servo mechanism to show the location of the planet physically! The hardware used in the project is pretty basic and simple, because the primary focus of this project is on the software, that is, to help people understand the algorithms and their implementations. So please bear with my “un-formatted” hardware.

Beyond planet tracking, you will learn some additional important things that you can implement in your other projects:

  1. Planet tracking using Kepler’s algorithms
  2. Many coordinate systems and their interconversion
  3. Pan-tilt programming and servo mapping (3.5-turn servo and 180-degree servo)
  4. MPU9250 auto-calibration programming
  5. Using the Madgwick/Mahony filter to stabilise MPU readings
  6. Yaw correction using a P-controller with the MPU9250

The…


PL/SQL Query to view tablespace use


I love making tools that help others. What better use of my time than to help others save theirs? When I went looking online, there were snippets of info about where to find total space or used space, but nothing that was a finished work of art. I compiled this from all the different snippets I found. It returns a table of all tablespaces, how large they are right now, and how much space they have left. It was also requested to have the AUTOEXTENSIBLE flag included, so that’s there too.

To run this, you need select access to DBA_DATA_FILES, DBA_FREE_SPACE and DBA_SEGMENTS.

I’m going to work on making this a view for users to keep track of the space usage, and proactively address running out of space prior to ETL failing.

SELECT TOTAL.TABLESPACE_NAME,
 TOTAL.SPACE_TOTAL_IN_MB,
 NVL(USED.SPACE_USED_IN_MB,0) AS SPACE_USED_IN_MB,
 AVAILABLE.SPACE_AVAILABLE_IN_MB,
 ROUND(NVL(USED.SPACE_USED_IN_MB,0)/TOTAL.SPACE_TOTAL_IN_MB,6)*100 AS PCT_USED,
 TOTAL.AUTOEXTENSIBLE
FROM
 (SELECT TABLESPACE_NAME,
 AUTOEXTENSIBLE,
 SUM(BYTES)/1024/1024.0 AS SPACE_TOTAL_IN_MB
 FROM DBA_DATA_FILES
 GROUP BY TABLESPACE_NAME,
 AUTOEXTENSIBLE
 ) TOTAL
LEFT JOIN
 (SELECT TABLESPACE_NAME,
 SUM(BYTES)/1024/1024.0 AS SPACE_AVAILABLE_IN_MB
 FROM DBA_FREE_SPACE
 GROUP BY TABLESPACE_NAME
 ) AVAILABLE
ON AVAILABLE.TABLESPACE_NAME = TOTAL.TABLESPACE_NAME
LEFT JOIN
 (SELECT TABLESPACE_NAME,
 SUM(BYTES)/1024/1024.0 AS SPACE_USED_IN_MB
 FROM DBA_SEGMENTS
 GROUP BY TABLESPACE_NAME
 ) USED
ON TOTAL.TABLESPACE_NAME = USED.TABLESPACE_NAME --join on TOTAL: a 100% full tablespace won't appear in DBA_FREE_SPACE
ORDER BY 1;
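
Once this becomes a view (say TABLESPACE_USAGE_V, a hypothetical name wrapping the query above), that proactive check is a one-liner:

SELECT TABLESPACE_NAME,
       PCT_USED
FROM TABLESPACE_USAGE_V --hypothetical view wrapping the query above
WHERE PCT_USED > 90 --flag anything over 90% used...
  AND AUTOEXTENSIBLE = 'NO' --...that can't grow on its own
ORDER BY PCT_USED DESC;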

Calculating odds of M&M pulls using PLSQL


There was a question/challenge posted to a bunch of us nerds, and I decided to go overkill with brute-force calculations. The challenge came down to: in a pull of 6 M&Ms, what are the odds of getting one of each color? I wrote a SQL script (PL/SQL) that uses the observed color distribution found here and came up with my answer: 1.43267503022458011648%
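
As a sanity check on the brute force: the closed form for one of each color is the number of orderings times the product of the six probabilities, 6! * (.1836 * .1416 * .1844 * .2076 * .1420 * .1408) ≈ 0.014327, which matches the 1.4327% above.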

98.55% of the time you’re not going to get one of each color, or all of the same color.

Also interesting to note are the odds of pulling all six of a single color:

BLUE:   0.0038303288464389083136%
BROWN:  0.0008060825508063215616%
GREEN:  0.0039315653631577133056%
ORANGE: 0.0080050508509564440576%
RED:    0.0008198418170944%
YELLOW: 0.0007791407675257913344%

Code:

WITH COLOR_DIST AS (
SELECT .1836 AS ODDS, 'BLUE' AS COLOR FROM DUAL
UNION SELECT .1416 AS ODDS, 'BROWN' AS COLOR FROM DUAL
UNION SELECT .1844 AS ODDS, 'GREEN' AS COLOR FROM DUAL
UNION SELECT .2076 AS ODDS, 'ORANGE' AS COLOR FROM DUAL
UNION SELECT .1420 AS ODDS, 'RED' AS COLOR FROM DUAL
UNION SELECT .1408 AS ODDS, 'YELLOW' AS COLOR FROM DUAL)
SELECT COLOR_MATCH_TYPES
 ,SUM(ODDS) * 100 AS ODDS
 ,'1:' || ROUND(1 / SUM(ODDS), 0) AS ONE_IN_WHAT
FROM (
 SELECT CASE 
 WHEN ODDS_1.COLOR = ODDS_2.COLOR
 AND ODDS_1.COLOR = ODDS_3.COLOR
 AND ODDS_1.COLOR = ODDS_4.COLOR
 AND ODDS_1.COLOR = ODDS_5.COLOR
 AND ODDS_1.COLOR = ODDS_6.COLOR
 THEN 'SAME COLOR ' || ODDS_1.COLOR
 WHEN ODDS_1.COLOR NOT IN (
 ODDS_2.COLOR
 ,ODDS_3.COLOR
 ,ODDS_4.COLOR
 ,ODDS_5.COLOR
 ,ODDS_6.COLOR
 )
 AND ODDS_2.COLOR NOT IN (
 ODDS_3.COLOR
 ,ODDS_4.COLOR
 ,ODDS_5.COLOR
 ,ODDS_6.COLOR
 )
 AND ODDS_3.COLOR NOT IN (
 ODDS_4.COLOR
 ,ODDS_5.COLOR
 ,ODDS_6.COLOR
 )
 AND ODDS_4.COLOR NOT IN (
 ODDS_5.COLOR
 ,ODDS_6.COLOR
 )
 AND ODDS_5.COLOR NOT IN (ODDS_6.COLOR)
 THEN 'DIFFERENT COLORS'
 ELSE 'OTHER DISTRIBUTION'
 END AS COLOR_MATCH_TYPES
 ,ODDS_1.ODDS * ODDS_2.ODDS * ODDS_3.ODDS * ODDS_4.ODDS * ODDS_5.ODDS * ODDS_6.ODDS AS ODDS
 FROM COLOR_DIST ODDS_1
 CROSS JOIN COLOR_DIST ODDS_2
 CROSS JOIN COLOR_DIST ODDS_3
 CROSS JOIN COLOR_DIST ODDS_4
 CROSS JOIN COLOR_DIST ODDS_5
 CROSS JOIN COLOR_DIST ODDS_6
 )
GROUP BY COLOR_MATCH_TYPES
ORDER BY 1;