PL/SQL Query to view tablespace use


I love making tools that help others. What better use of my time than to help others save theirs? When I went looking online, there were snippets of info about where to find total space or used space, but nothing that was a finished product. I compiled this from all the different snippets I found. It returns a table of all tablespaces, how large each one is right now, and how much space it has left. It was also requested that the AUTOEXTENSIBLE flag be included, so that’s there too.

To run this, you need SELECT access to DBA_DATA_FILES, DBA_FREE_SPACE, and DBA_SEGMENTS.

I’m going to work on making this a view so users can keep track of space usage and proactively address running out of space before an ETL job fails.
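For reference, a minimal sketch of the same idea looks like the query below. This is not my exact finished query — it assumes only the standard DBA_DATA_FILES and DBA_FREE_SPACE dictionary views, and the column choices and MB conversion are illustrative:

```sql
-- Sketch of a tablespace-usage query (illustrative, not the finished version).
-- MAX(autoextensible) reports YES if any data file in the tablespace can autoextend.
SELECT df.tablespace_name,
       ROUND(df.total_mb, 2)                      AS total_mb,
       ROUND(df.total_mb - NVL(fs.free_mb, 0), 2) AS used_mb,
       ROUND(NVL(fs.free_mb, 0), 2)               AS free_mb,
       df.autoextensible
FROM   (SELECT tablespace_name,
               SUM(bytes) / 1024 / 1024 AS total_mb,
               MAX(autoextensible)      AS autoextensible
        FROM   dba_data_files
        GROUP  BY tablespace_name) df
       LEFT JOIN
       (SELECT tablespace_name,
               SUM(bytes) / 1024 / 1024 AS free_mb
        FROM   dba_free_space
        GROUP  BY tablespace_name) fs
       ON df.tablespace_name = fs.tablespace_name
ORDER  BY df.tablespace_name;
```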


Speed testing the MPU9150’s functions using a LinkIt ONE



I am working on a project where I am gathering a lot of data. One of my data sources is my 9DoF sensor. Since I am logging at sub-second intervals, I wanted to know how long (the cost in time) each sensor takes to return data, so I can gather as many data points, from as many data sources, as possible.

I set up this project with the sole intention of determining the cost in milliseconds of the standard example, and then figuring out how to make it faster.


I grabbed my trusty LinkIt ONE and my 9DoF sensor (MPU9150, connected over I2C), and looked at the example library. I wanted to get data from each of the sensors as fast as possible; I will do post-capture processing for heading and orientation. I set my sights on these three functions:

  • getAccel_Data
  • getGyro_Data
  • getCompass_Data

I then made blocks of C to run in a loop:

Serial.print("Acc1_");
Serial.print(millis() % 1000);   // start time
getAccel_Data();
Serial.println(millis() % 1000); // end time

This is my code for checking a single accelerometer data gathering. I figured all I had to do was run the same block, with additional runs of “getAccel_Data();” and notations of “Acc2_” and “Acc3_”, so I can do comparisons afterwards. I then take the time difference between the start and end and map it to the sensor read type and the sensor read count. After that, I subtract the time of the single run from the double and triple runs to find how much longer the one or two additional runs took.

Accelerometer Tests

Let’s have a look at one set of my results (do note, time is in milliseconds, 1/1000th of a second):

Run label   Average runtime (ms)   StdDev (ms)
Acc1        23.307                 2.230
Acc2        45.701                 3.985
Acc3        68.123                 6.254

The first thing I see here is that 23.3 + 45.7 does not equal 68.1. This is good; it is what I expect and am glad to see, because each measurement includes a fixed amount of overhead, so the runtimes are not simply additive. My formula for calculating the actual average runtime of getAccel_Data is as follows:

( (Acc2 - Acc1) + (Acc3 - Acc1) ) / 3

This gives double weight to the difference between Acc3 and Acc1, since it covers two additional runs while Acc2 minus Acc1 covers only one.

Testing All Sensors

I created a few sets of functions besides the stock ones to test out.

  • getAccelGyro_Data: Combined getAccel_Data and getGyro_data, eliminating duplicate runs of accelgyro.getMotion9
  • getDof_Data: Combines getAccelGyro_Data and getCompass_Data
  • getRawDof_Data: getDof_Data without the calculations, just raw values

Ultimately, I let my computer run, and I got 4,600 records for each set of sensors. Here are my results:

Function Runs/cycle Average (ms) StdDev (ms)
getAccel 1 23.307 2.230
getAccel 2 45.701 3.985
getAccel 3 68.123 6.254
getAccelGyro 1 23.364 1.998
getAccelGyro 2 45.844 3.899
getAccelGyro 3 68.235 6.166
getGyro 1 23.303 2.005
getGyro 2 45.729 3.924
getGyro 3 68.192 6.200
getCompass 1 11.231 2.411
getCompass 2 21.774 4.139
getCompass 3 32.368 6.095
getDof 1 33.926 3.493
getDof 2 66.875 7.062
getDof 3 99.803 10.856
getRawDof 1 33.945 3.418
getRawDof 2 66.967 7.083
getRawDof 3 99.951 10.899

From this, I calculated the following:

Function Actual Cycle Cost (ms) Overhead Cost (ms)
getAccel 22.4038 0.8983
getGyro 22.4380 0.8591
getCompass 10.5597 0.6631
getAccelGyro 22.4504 0.9287
getDof 32.9420 0.9872
getRawDof 33.0094 0.9418


Conclusions

  1. The library should not have separate “getAccel_Data” and “getGyro_Data” functions; combining the two gives optimal performance.
  2. When we look at “getDof” vs. “getRawDof”, saving the larger raw values instead of reducing their size before saving actually costs time: about 0.0674 milliseconds per cycle.
  3. You save 22.4596 milliseconds per cycle by combining the sensor-data gathering into one procedure.
  4. You save 0.0681 milliseconds per cycle by running “getDof_Data” instead of “getAccelGyro_Data” followed by “getCompass_Data”.

Pringles Can antenna with a LinkIt ONE


With my never-waning obsession with wardriving, I wanted to make something that played off it and that anyone could use. I’ve seen can antennas over the years, and wondered about making one.
So I did. And here it is:


  • Zip ties
  • Pringles can
  • Linkit ONE
  • Grove RGB screen
  • Big paper clip

How I made it:

First I looked at how cantennas were made online. Builders were not only putting the antennas in the can, they were actually making a Yagi. This was something I decided I was not going to do: too much effort, and not enough time. I put a zip tie through the can right below the nutritional information, as this is where I saw the true builds place their entry point. I connected the zip tie around the outside and pulled it taut. This would be the base of the antenna’s location.



I then attached the antenna to the in-can portion of the zip tie, making sure I had enough of the tail to comfortably attach to the LinkIt without strain; the last thing I needed was to break my Wi-Fi card. With that in place, I connected the antenna to the LinkIt, placed the LinkIt against the can, and put one zip tie on snugly to keep it from going anywhere. I tucked the battery into the left side, allowing just enough wire length to connect it.
The final piece is the screen. Here I did a loop of a zip tie through the end of the can, then held it in place with a large paper clip. I felt that the way I did it embodied the “hacker” mentality of doing what’s needed to get the job done.

The code:

The code I wrote is an adaptation of the Wi-Fi scan example. There are three different versions of the code. The first rolls through all the 2.4 GHz networks on the screen. The second excludes some; since I work where there are many APs, this was my main code. Then, when I did my actual testing, I filtered my results to only display one AP’s signal.

On boot up, I also have the screen show the battery percentage, as I wanted to make sure I had enough juice. I’m still at 100% after almost an hour of testing. The LinkIt is a beast.

Code sauce: Wifi Witch Code

Testing Method:

I wanted to test the effectiveness of the antenna, so I grabbed my second LinkIt (thank you, Seeed!), put it in my coveted Lego v3 case, and had it report my location along with its signal strength. No reflectors of any kind were used. I had it report to my computer, and I recorded the data into Excel. I set the router up next to the window to give me the best chance to see it with my equipment. No mods were done to my stock WRT54GS, and it was transmitting at its stock wattage (75 mW or so).


Testing out in the field:

I literally walked out into a field and parking lot to see how far it would work. The graph below shows my results by distance:

[Graph: Signal Strength Comparisons by distance]

As we can see, there’s a noticeable difference in signal strength at the farther distances, with the can providing a higher signal level at all distances (except the fluke at 206 m; I don’t know, maybe because I was on a hill?). The main thing to note is that the antenna in the can provided a solid signal, while the unshielded antenna was spotty. The difference in signal quality was noticeable. Overall, a Pringles can over your antenna will help you maintain contact from a long distance.

Future plans:

I plan on making a proper logger for the LinkIts so that it records their location along with the signal strength so I can do a scatter plot. This should show the quality of the data, not just the theoretical maximum.

Almond+ unboxing and benchmarking


There it sits. Nothing that outstanding in looks. Nothing outstanding in sound. But it knows it’s better than what you think of it. Under the plastic case, behind the screen, is $400 worth of technology, and the realization of a dream of a few engineers who asked “why not?”


As I looked away from my new Almond+, I saw my WRT54GS and my old Almond. I fell in love with my Almond shortly after my Linksys E3000 overheated. I needed a reliable router, and there it was on Amazon with thousands of 5-star reviews. I can honestly say I am now one of those 5-star reviews. The simplicity, size, and performance of the Almond are fantastic. The screen that I initially thought was a sales gimmick became my main way of administering my network. It’s smaller than my cable modem, an SB6120.

Since I was able to score an early model, I decided to do an unboxing and benchmarking for anyone who might have questions as to what this thing can do.

For my test, here’s the setup:

ISP: Comcrap, 100 Mbps down / 25 Mbps up (or 12 Mbps up, I’m not sure)

Modem: Personal SB6120 with updated firmware. DOCSIS 3.0.

Testing devices: HP EliteBook 8440p with gigabit Ethernet and a 5 GHz N chipset; Samsung Note 3, tested on N bands

Routers: WRT54GS flashed with DD-WRT, Almond, Almond+. Testing was done on the fastest bands available at under 3 feet.

Benchmarks were performed using a speed-test website in IE or the native Android app. Results were as follows:
