So I have an upcoming engineering project I’m working on… I’m trying to optimize an unusual powered propulsion system. I’m still working on an iOS / Android app to take detailed response data, but that’s another story. Right now I’m wondering how I’m going to do the statistical analysis for the testing when I begin to accumulate results. In the old days, I used Minitab. Awesome software program, but holy moly is it expensive. Ouch. I’m wondering whether, in this day and age, there is an open source alternative. And that’s what this posting is all about. I thought I’d build an experiment, and then attempt to analyse it with openly available (read that as “free”) software.

First off, I wanted an experiment that would be cheap and easy to do. Something that would be easy to understand. Something I (or anybody else) could test easily. And here’s what I came up with.

The Coin Drop Test


Drop a coin over a target, measure the distance from where it lands to the original target center. Obviously the goal is to land the coin as near to the target as possible.

Factors (each factor will have two levels):
Size of Coin (Dime or Quarter)
Height of drop (60″ or 30″)
Coin Release Orientation (horizontal or on edge)
Drop Hand Technique (finger/thumb or two fingers)
Target Surface (exercise mat vs deep pile carpet)
Response:
Distance from dropped coin to target center (C to C, inches)

The target is a piece of masking tape with a simple ‘X’ marked on it. For the target surface I used two different floor surfaces. My initial thought was that the deep pile carpet would keep coins from bouncing too far away from the target, relative to the gym mat. I used two different easily available coins: a quarter and a dime. I would have thought the heavier quarter might travel less than the dime. Coin release orientation? My initial thought was that a coin dropped “flat” would remain close to the target. Obviously I knew there would be a lot of variation in the test results. I wanted to see where an experiment with lots of variability might go. I designed a full factorial experiment (2^5 = 32 combinations). In fact I ran each test four times for a total of 128 different tests. I was very careful to randomize the tests. I used a simple random number generator, then re-sorted the test criteria in an Excel sheet.
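
(If you’d rather script the run sheet than wrangle a spreadsheet, here is a minimal Node.js sketch of the same randomization idea. The factor names match the CSV columns used in the analysis below; the level labels are my own shorthand.)

// Build the 2^5 full factorial run sheet, replicate it four times,
// then randomize the run order (Fisher-Yates shuffle).
const factors = {
  coin_size: ["dime", "quarter"],
  drop_height: [30, 60],
  drop_orientation: ["horizontal", "edge"],
  hand_technique: ["finger_thumb", "two_fingers"],
  target_surface: ["mat", "carpet"],
};

// Cartesian product of the factor levels: 32 combinations
let runs = [{}];
for (const [name, levels] of Object.entries(factors)) {
  runs = runs.flatMap((run) => levels.map((level) => ({ ...run, [name]: level })));
}

// Four replicates of each combination = 128 tests, in standard order
runs = Array.from({ length: 4 }, () => runs).flat()
  .map((run, i) => ({ original_order: i + 1, ...run }));

// Fisher-Yates shuffle assigns the randomized run order
for (let i = runs.length - 1; i > 0; i--) {
  const j = Math.floor(Math.random() * (i + 1));
  [runs[i], runs[j]] = [runs[j], runs[i]];
}
runs.forEach((run, i) => (run.random_order = i + 1));

console.log(runs.slice(0, 3)); // peek at the first three randomized tests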

A couple of observations and notes: It would be best to decide before you start how accurate you want to be in your measurement. Round off to the nearest inch? Nearest half inch? Or find a metric tape measure and measure in mm or cm? I started with the nearest 1/4″ and that was probably not necessary. There was a lot of variability in the drop technique. Quite a few times when using the two-finger, coin-on-edge method with a dime, the dime would stick to my fingers, delaying the drop. It’s funny, but by paying close attention to the test, you can sometimes spot unexpected trends, something you want to test in the next round of analysis.

And if you want to repeat or modify this experiment, here is the raw data! Do note, there are two columns there to aid in the randomization of the experimental design. Check out the columns original_order and random_order. Sort by one column or the other as necessary. In normal order it’s pretty easy to see how this full factorial experiment was set up.

And what makes design of experiments / ANOVA so awesome is that you are not making predictions about results… but instead merely observing what happens. You may believe something is true, but this is a way to prove it (or not!) This is an experiment, used to help identify possible further opportunities to improve the desired response. You may not know WHY factor A is better than factor B, but in observation something is measurably different between those two factors.

Analysis of Variance (ANOVA) — What is it?

To determine whether the difference in results is due to random chance or a statistically significant difference in the process or factors, an ANOVA F-test is performed. The F-test is a tool used by statisticians to determine whether different test observations occur because of random chance or because of a true difference in outputs based on which input (factor) is in use. The ANOVA F-test uses null hypotheses like these:

H(0): Coin size will have no significant effect on distance to target after coin drop.
H(0): Drop height will have no significant effect on distance to target after coin drop.
H(0): Coin release orientation will have no significant effect on distance to target after coin drop.
H(0): Drop hand/finger technique will have no significant effect on distance to target after coin drop.
H(0): Target surface will have no significant effect on distance to target after coin drop.

In a decent software package, analysis is performed on all the factors one at a time, and then in combination. You could program this yourself (in Excel?) but it’s pretty easy to make a mistake, and that’s not generally recommended. Note: for a nice discussion of the details of how such a program would work, I found this analysis pretty helpful.
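
Just to give a feel for those mechanics (and definitely not as a substitute for a real package), here is a toy one-way F computation in JavaScript. The numbers are made up for illustration:

// Toy one-way ANOVA F statistic: F = (between-group variance) / (within-group variance).
// Made-up sample data; real multifactor work belongs in R's aov().
function oneWayF(groups) {
  const all = groups.flat();
  const grand = all.reduce((a, b) => a + b, 0) / all.length;
  const k = groups.length; // number of factor levels
  const N = all.length;    // total observations

  const mean = (g) => g.reduce((a, b) => a + b, 0) / g.length;

  // Between-group sum of squares: group means vs the grand mean
  const ssb = groups.reduce((s, g) => s + g.length * (mean(g) - grand) ** 2, 0);

  // Within-group sum of squares: observations vs their own group mean
  const ssw = groups.reduce(
    (s, g) => s + g.reduce((a, y) => a + (y - mean(g)) ** 2, 0), 0);

  return (ssb / (k - 1)) / (ssw / (N - k)); // MSB / MSW
}

// e.g. distance (inches) for two drop heights, four drops each -- made-up numbers
console.log(oneWayF([[2.5, 4.0, 3.25, 5.0], [6.5, 8.0, 5.75, 9.25]]));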

And that led me to start looking at open source software packages that might work. I looked at a whole lot of things. I discovered a whole lot of paid applications. Many of those included a 30 day free trial, but I’m really looking for a long term sustainable solution. I ended up looking very closely at two different packages, PSPP and R. A very common package in use is the paid program Statistical Package for the Social Sciences (SPSS). It’s a very nice program from the folks at IBM, but definitely not inexpensive. GNU PSPP is a program for statistical analysis of sampled data. It is intended as a free replacement for the proprietary program SPSS, and appears very similar to it with a few exceptions. Actually there are quite a few exceptions. You can run an F-test analysis on a single factor, but I was unable to run a complete analysis on a multifactor experiment. Perhaps I was just doing it wrong, but that just didn’t work for me.

Instead I discovered the R project. Yowza, we have a winner. It’s not as intuitive as my favorite tool Minitab, but with a bit of effort you can get some decent results…

First, download R, then run it. I’m using the R GUI. In the console paste the following:

datafilename="C:/Users/Username/DirectoryX/dfd_coin_drop_experiment.csv" #tell where the data come from
data.ex1=read.csv(datafilename,header=TRUE, sep = ",") #read the data into a table
aov.ex1 = aov(distance_from_target~coin_size*drop_height*drop_orientation*hand_technique*target_surface,data=data.ex1) #do the analysis of variance
summary(aov.ex1) #show the summary table

The magic happens here. That last column, Pr(>F), is the probability of seeing differences this large purely by random chance, assuming the stated null hypothesis is true. If that probability is low, we must reject the null hypothesis as stated. For this test, we are going to work at a 95% confidence level. Large values of Pr(>F) indicate that we cannot reject the null hypothesis: even though there may be differences in the calculated means, given the variability in the system, the null hypothesis stands. H(0): Coin size will have no significant effect on distance to target after coin drop. Pr(>F) = 0.77268. It’s only when those probabilities are very small that differences in the factor have a real effect on the response. If Pr(>F) is less than 0.05, those factors are critical to the system response. The ANOVA summary from R includes nice visible significance codes next to factors that may be significant.

As you scan the ANOVA results, you can see that the following factors are significant.

  • drop height
  • an interaction between coin_size:drop_orientation
  • an interaction between drop_height:drop_orientation
  • an interaction between drop_orientation:hand_technique
  • an interaction between drop_orientation:hand_technique:target_surface

Do note that in most experiments, significant two-factor interactions are pretty rare. Generally we are mostly concerned with single factor effects. That the factor of drop height is significant seems pretty intuitive. Coins dropped from the lower height ended up closer to the target than coins dropped from the higher height. Frankly I was surprised at the interaction factors here. I suspect there is just a whole lot of variability in what’s going on, and I think some changes should be made to the tested factors and the experiment re-run. I was also surprised by the target surface result. I was pretty sure we’d see coins closer to the target on the deep pile carpet than on the mat, but that’s not what the results reveal. I wonder what would happen if I poured a half inch of beach sand on the carpet and reran the test? (Oh, I know the answer to that one, even without testing; my wife would kick me in the butt, and toss me out of the house.)

And let’s quantify the means of each test factor and combinations:

print(model.tables(aov.ex1,"means"),digits=2) #report the means and the number of subjects/cell

Finally, let’s plot our key factor results. Note that in the following plots, the thick black line represents the median value, and the colored box represents the 25th to 75th percentile performance.

Plot — Drop Height

boxplot(distance_from_target~drop_height,main="Coin Drop Test", xlab="Drop Height (inches)", ylab="Distance from Target Center", col=rainbow(7),data=data.ex1)

[Pr(>F) = 0.00226] It’s pretty easy to see here that the accuracy to target is better with the low (30″) drop height. Additionally, variability is smaller with the 30 inch drop height compared to the 60 inch drop height.

Plot — Drop Height : Drop Orientation Interaction

[Pr(>F) = 0.01241]

boxplot(distance_from_target~drop_height:drop_orientation,main="Coin Drop Test", xlab="Drop Height (inches): Coin Orientation", ylab="Distance from Target Center", col=rainbow(7),data=data.ex1)

Look closely at this graph. You can see why coin orientation all by itself isn’t a significant factor. You can also see why the interaction works the way it does. Was this expected? No way. The data here is observed. But according to the test results, it’s a valid predictor, given those two factors defined in that way. Obviously if you want to minimize coin distance to target, you’d run your system with the coin held on edge, dropped from 30 inches. I will say, when I start to see interactions like these my tendency is to really analyse the system, try to figure out what causes these results, and adjust (or add) factors to further improve performance.

Plot — Drop Orientation : Hand Technique Interaction

[Pr(>F) = 0.02917]

boxplot(distance_from_target~drop_orientation:hand_technique ,main="Coin Drop Test", xlab="Drop Orientation: Hand Technique", ylab="Distance from Target Center", col=rainbow(7),data=data.ex1)

Wait, er, what the heck? Look at this graph and the one above it. The results here sort of conflict with the results above. Above, your solution was to hold the coin on edge, but here, the optimal solution was to hold the coin horizontal. Remember I said interactions aren’t all that common. Again, you may have to adjust and/or add factors to improve performance. There may well be a better way to optimize the system’s design.

Plot — ETC…

Obviously you can continue to plot out all factors with Pr(>F) less than 0.05…

  • coin_size:drop_orientation Pr(>F) = 0.03362
  • drop_orientation:hand_technique:target_surface Pr(>F) = 0.03734

 

Plot — One factor that wasn’t significant… Target Surface

[Pr(>F) = 0.11383] This was kind of a surprise. I fully expected deep pile carpet to improve coin-to-target performance. It didn’t have a significant effect at the 95% confidence level. With the median values and variability in these results, there is no advantage here.

boxplot(distance_from_target~target_surface,main="Coin Drop Test", xlab="Target Surface", ylab="Distance from Target Center", col=rainbow(7),data=data.ex1)

Conclusion

And that concludes our exercise. I’m hoping this example makes sense. My goal was to pick a test example that intuitively gave the user a feel for what’s going on. Did the results meet your expectations? And as for open source software, it seems like R is a winner for our analysis.

Here’s a recent project for a customer that had some unique requirements. It’s for a public brand marketing kiosk type of device. But it’s much larger than that. In fact, the presentation has one audio output (music) and two unique video feeds: one a large flatscreen TV (with audio) and the other a simple display monitor. We wanted to give the customer a way to update the audio and video presentations.

  • Keep it simple
  • Make it reasonably secure
  • Make it reliable (Power On/Off?)

And here is the solution we came up with.

  1. We’re going to use an Intel NUC solid-state computer to host a Node.js Express server.
  2. The Node Express server API will allow for the upload and management of audio/visual files (sketched below).
  3. The server will play movies on a dedicated URL.
  4. The server will also play and manage music. We’re using https://github.com/victordibia/soundplayer, a JavaScript wrapper around the simple MPG123 music player program, which is available for any operating system.
  5. The interface to the server is a simple browser with a local IP address. The system includes a wifi router, but is not connected to the internet.
  6. I used the PureCSS library for the user interface.

And so on…
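
As a rough sketch of item 2 above (and only a sketch: the route names and the multer upload middleware are my assumptions, not the actual project code), the upload and management API could look something like this:

// Sketch: minimal Express API for uploading and listing A/V files.
// Assumes the multer middleware; routes and paths are hypothetical.
const express = require("express");
const multer = require("multer");
const fs = require("fs");

const app = express();
const upload = multer({ dest: "media/" }); // uploaded files land here

// Upload a new audio or video file
app.post("/media", upload.single("file"), (req, res) => {
  res.json({ stored: req.file.filename, original: req.file.originalname });
});

// List the available content so the customer can manage it
app.get("/media", (req, res) => {
  fs.readdir("media/", (err, files) => res.json(err ? [] : files));
});

app.listen(3000); // reachable at the kiosk's local IP address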

Here’s a screen shot of the Raspberry Pi admin interface…

One specific issue we wanted to address was the use of Raspberry Pi devices with simple power on/off functionality. We have an approach for that. We programmed each Raspberry Pi to play a movie from a given URL, on repeat. To harden things for power-off (never clever with a normal Raspberry Pi setup) we converted the flash file system to read-only. Program it once, and run it forever. We’ve also provided an interface to manage the content going to each Raspberry Pi.

Here’s the recipe for the Raspberry Pi build… it’s pretty big, so you can download the file here… We’re using omxplayer to play the video files. Note, the info for hardening the flash drive came from https://www.raspberrypi.org/blog/adafruits-read-only/ Check the recipe file above for details…

There were a couple of pleasant surprises on this project. At one point I was getting stuck with Node.js troubles. Not sure what happened, but I was totally unable to get the Chrome browser debugger to work with my application. I ended up trying the VS Code tool, something I’d seen before but had never used. Wow. No really, wow. That tool made a whole lot of difficult tasks way easy. Many thanks, Microsoft.

The other surprise I had was related to my need for a wireless keyboard. I found this puppy from the folks at Logitech. It’s a $20 wireless keyboard / touchpad combo. I think it was originally designed for sitting on the couch and surfing the web on your TV with this thing on your lap. This keyboard pairs seamlessly and works quite well. I’ll use it now for every IoT project. The mouse in the photo was a $10 addition, and because it too was from Logitech, one USB mini-dongle worked for both input devices.

So I was working with a customer who needed, among other things, a friendly and secure telephone directory. The customer would publish a pocket sized paper pamphlet every year. Paper? Really? There must be a better way.

  • Nobody wants to download an app (from Google Play or iTunes). No way, no how.
  • Users really don’t want another login / password for yet another system.
  • Who wants to update a phone directory every time someone joins or exits the group? Not I, said your software developer.
  • We want a system that is easy to update, easy to maintain.
  • If we are talking about members’ telephone numbers and contact information, secure, protected access is a critical requirement.
  • Just about everyone has access to a mobile phone. Let’s take advantage of that.
  • Utilize tools in the mobile phone browser that make it easy and quick to do phone calls and text paging, via href="tel:+1…" and href="sms:+1…"
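
(As a tiny sketch of that last bullet, here’s how one directory entry might be rendered. The member fields are hypothetical; the href formats are the standard tel: and sms: schemes.)

// Render one directory row with tap-to-call and tap-to-text links
function directoryRow(member) {
  return '<li>' + member.name +
    ' <a href="tel:+1' + member.phone + '">Call</a>' +
    ' <a href="sms:+1' + member.phone + '">Text</a></li>';
}

console.log(directoryRow({ name: "Jane Doe", phone: "5555551234" })); // hypothetical member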

I had done a project using Google Apps Script and Google Sheets as a data storage medium. There were some issues there: the script ran in the browser, which required a publicly shared Google sheet. Cool idea, if the use case fits that situation. Great for product listings, not great for personal data.

I saw a note somewhere about new offerings from Google, including Firebase mobile phone authentication with a healthy free tier. Free tier, wow, I’m all in. As an introduction, Google Firebase is a suite of support tools that was initially used in support of Android apps. Google has expanded the offerings to include iOS, Android, and the Web. Setup includes content written for Swift, Objective-C, Java, JavaScript, C++ and Unity!
The available tools include cloud data storage, data sync services, messaging delivery services, analytics, marketing tools and the thing I’m interested in, Authentication. The pricing for the phone authentication service free tier is pretty awesome: 10,000 free requests per month.

The tool I’m interested in using is Authentication via phone. A website displays a mobile phone number submission form. Google Firebase authenticates the phone number via an SMS text message with a confirmation code. As designed this is a pretty awesome way to confirm a true mobile phone number… think of marketing folks who want to harvest new users by their telephone number. (I have a couple of customers very interested in doing this type of harvesting…) And in my case, I’m taking this one step further. I’m adding a specific user authentication code via “Custom Claims” by editing the Firebase user data store. The authentication code is buried in the token, provided to the web browser client via a bit of Google JavaScript. Upon receipt of the token, the website makes an AJAX request back to the server. The server tests the token, and provides the appropriate response depending on the user’s authority. Here is the reference information on using auth tokens (with custom claims) to restrict access. Note: There is a bit of a restriction placed on the free access tier from Google here. It takes a two way trip from the server to add custom claims to a user’s content at the Google Auth storage. The free tier limits that to 30 requests per hour. It’s fine for managing a small team of users, but no way would that work for a large user base. You’d have to upgrade to a paid service. I will say, that’s more than fair…
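
Here is a sketch of the server side, assuming the firebase-admin SDK. The claim name is hypothetical, but setCustomUserClaims and verifyIdToken are the documented firebase-admin calls.

// Sketch of the server side with firebase-admin. The "directoryUser"
// claim name is hypothetical; the API calls are the documented ones.
const admin = require("firebase-admin");
admin.initializeApp(); // credentials come from the service account setup

// The rate-limited step: tag an approved user with a custom claim
async function approveUser(uid) {
  await admin.auth().setCustomUserClaims(uid, { directoryUser: true });
}

// Express middleware: the browser sends the Firebase ID token with its
// AJAX request; the server verifies the token and checks the claim.
async function requireDirectoryUser(req, res, next) {
  try {
    const idToken = req.headers.authorization.replace("Bearer ", "");
    const decoded = await admin.auth().verifyIdToken(idToken);
    if (!decoded.directoryUser) return res.status(403).send("Not authorized");
    next();
  } catch (err) {
    res.status(401).send("Invalid token");
  }
}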

Here is a sketch of the authentication process and data flow:

The whole thing works via JSON Web Tokens. JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA.
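
(Firebase signs and verifies its tokens for you; just to see the sign/verify round trip the standard defines, here’s a generic demo assuming the jsonwebtoken npm package and a made-up secret.)

// Generic JWT round trip with the jsonwebtoken package (not Firebase's own signing)
const jwt = require("jsonwebtoken");

// Sign a payload with an HMAC shared secret (made-up here)
const token = jwt.sign({ user: "alice", directoryUser: true }, "shared-secret",
  { algorithm: "HS256", expiresIn: "1h" });

// verify() throws if the signature or payload has been tampered with
const payload = jwt.verify(token, "shared-secret");
console.log(payload.user, payload.directoryUser); // alice true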

Do note, this isn’t a perfect high security system. Tokens are stored in the browser cache. Anybody who shares phones is sharing the browser content. It’s a lightweight security system, but probably appropriate for the current usage: a phone directory or secure reference content for a small business’ employees.

Software Tools Used Here:

  • jQuery Mobile. A nice mini framework for this type of application
  • Google Firebase
  • Simple Node Express server. Framework generated via Express application generator
  • Docker used on a Virtual Private Server (proved out via Docker on localhost)
  • Ajax call to server on the phone list. The server verifies token and the custom claim, then returns content
  • JSON Web Tokens
  • Google Sheets API v4
  • Mobile browser links for phone calls and text paging, via href="tel:+1…" and href="sms:+1…"

User Data Store:

Content is stored in a privately shared Google sheet. We use Google Apps Script and the Sheets API v4 to gain server access to the user data. Again, we’re obtaining this info from the Node.js server, not from the browser. What really makes this nice is that non-technical folks can easily keep the data updated via Google Drive. The data can even be updated from a mobile phone to add new users.
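
For the server-side read, here is a sketch assuming the googleapis npm package and a service account with read access to the sheet. The spreadsheet ID and range are placeholders:

// Sketch: read the directory rows from the privately shared sheet.
// Assumes the googleapis package; sheet ID and range are placeholders.
const { google } = require("googleapis");

async function loadDirectory() {
  const auth = await google.auth.getClient({
    scopes: ["https://www.googleapis.com/auth/spreadsheets.readonly"],
  });
  const sheets = google.sheets({ version: "v4", auth });
  const res = await sheets.spreadsheets.values.get({
    spreadsheetId: "YOUR_SHEET_ID",   // placeholder
    range: "Directory!A2:C",          // placeholder: name, phone, email
  });
  return res.data.values || [];
}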

Possible Enhancements:

Although I haven’t analysed all the how-to details, it should be possible to send out notifications to all users in the system via SMS messages. I can see where this could be very helpful. Another possible enhancement, more oriented to reference data, is using Progressive Web Apps to control the storage of protected data in cache. I have one possible customer application for this technology, where users are sometimes not located in an area with mobile phone reception.

Conclusion:

The tools available from Google are pretty awesome. Firebase can be used to supplement a whole lot of applications. In this case we are using the mobile phone authentication tools to authenticate and restrict data access to a small group of users. Google Apps Script, the support tools for Google Drive applications (including Google Sheets, online spreadsheets) is way helpful for prototyping applications, or applications where non-technical folks can keep private data easily updated.

So, I’m doing some very involved woodworking, including gluing up complicated pieces. The problem, of course, is you only have so long to get things aligned and clamped up before the glue sets. If you are not organized, or something doesn’t fit just right, you are in trouble. Generally I always purchase slow-set glue, so my working time is around eight to ten minutes. I will say, if things don’t go together by minute nine, then I rip the whole thing apart, let the glue dry on the individual components, then come back, clean things up and try again. With woodworking it’s pretty easy to fix woes (take one more cut, use a block plane, or if necessary fill a gap with an epoxy/sawdust mix and recut).

With that in mind, what I really want is an audible count-up timer. When I’m in glue-up mode, I don’t have time to be looking at the clock. Give me an audible and we’re good. At first glance I didn’t see anything off the shelf that did this, so I thought hey, I’d just write my own. And the easiest way to do that is in, you guessed it, JavaScript in the browser. Without further ado, here is an audio timer!

Try it for yourself, click here.

The code is pretty straightforward; the only real trick is the push/pop array for the setInterval timer IDs. Without that, multiple button presses will drive you pretty crazy. Happy glue-ups!


<script type="text/javascript">
    // global variables
    var x = []; // container for active setInterval timer IDs
    var startDate;

    // stop the most recently started timer
    function done() {
        clearInterval(x.pop());
        document.getElementById("demo").insertAdjacentHTML('beforeend', "Timer Stopped...");
        console.log("All Done x: ", x);
    }

    function startTimer() {
        // Record the moment the timer started
        startDate = new Date().getTime();
        var msg = new SpeechSynthesisUtterance("Start Timer");
        window.speechSynthesis.speak(msg);

        // Update the count every 1 second
        var tempX = setInterval(function() {
            // Elapsed time since the timer started
            var now = new Date().getTime();
            var distance = now - startDate;

            // Time calculations for minutes and seconds (a glue-up never runs for hours)
            var minutes = Math.floor((distance % (1000 * 60 * 60)) / (1000 * 60));
            var seconds = Math.floor((distance % (1000 * 60)) / 1000);

            // Display the result in the element with id="demo"
            document.getElementById("demo").innerHTML = minutes + "m " + seconds + "s ";

            // Speak the elapsed time every 15 seconds
            if (seconds % 15 == 0) {
                if (minutes == 0) {
                    msg = new SpeechSynthesisUtterance(seconds + " seconds.");
                } else {
                    msg = new SpeechSynthesisUtterance(minutes + (minutes == 1 ? " minute and " : " minutes and ") + seconds + " seconds.");
                }
                window.speechSynthesis.speak(msg);
            }
        }, 1000);
        x.push(tempX);
        console.log("start timer x: ", x);
    }
</script>

So GoDaddy did another zinger on me, and this one hurts. I have no idea why they changed the hosting server, but they did so without asking me first. Note: They did send an email (just one) telling me about the change. The email was titled: Hosting Account Update.

Hosting account update


Congratulations! You are now able to publish to your updated hosting account.
Please note that it can take up to 24 hours from the time the update was completed for any newly-published content to be visible on your website.
DNS NOTE: Your account has been assigned the new IP address, X.X.X.X … 

And yeah, I happened to miss that one email. At no time did I receive a text page, a second email or a confirmation check that I’d read the email. Instead, my website went down, big time. And although I use GoDaddy as my domain name registrar, they didn’t do much to ensure that the website stayed up and running. So, no surprise, the website goes down hard. I haven’t been using the site much, but I still rely on it to be operational. I was glancing at my analytics for the site, and was shocked to see zero hits for the past ten days. I did some research, and figured out what happened: GoDaddy changed the server and IP address, and I needed to manually update CloudFlare to match. I got the site up and running again in a few minutes.

Now let’s be reasonable here. No, I don’t expect GoDaddy to update CloudFlare for me. But remember, this change was NOT made at my request. The change was done without my knowledge, and I still don’t understand why they made it. Because of that, I expect GoDaddy to take some care to ensure that the customer (me) isn’t adversely affected by the change. I would have thought a contact with a request for confirmation, or contact over two or even three communication methods, would be more effective. If a customer doesn’t contact GoDaddy to confirm receipt of the email, then it’s time to pull out the phone and make a call. GoDaddy could clearly see that the old IP address wasn’t registered on their domain name servers under my account, and that no matter what, I would have to manually change an IP listing on an A record somewhere, somehow.

DogFeatherDesign Analytics

It turns out that is not the end of the story. Apparently the search engines know when a site is active or not. If a site goes dark and then reopens, your SEO is hosed. You no longer get the referrals from the search engines that you used to have. This is a painful lesson. What’s the best fix? I think it’s time we bid adieu to GoDaddy Hosting. Thanks anyway.

I’ve been playing with an Orange Pi Zero. These are pretty amazing products at a pretty inexpensive price point. Do note, there are some oddities here. I’ve been following a pretty impressive article on the Orange Pi Zero written by Luc Small.

Luc talks about setting up the connection to the wifi network by updating the interfaces file:

$    nano /etc/network/interfaces

Add the following 4 lines to the end of the file:

auto wlan0
iface wlan0 inet dhcp
wpa-ssid <Your Access Point Name aka SSID>          # wpa-ssid myNetwork
wpa-psk <Your WPA Password>                         # wpa-psk myPassword


Er, yeah. So what makes this sort of difficult is that you can’t set up the device’s wifi unless you are already able to communicate with the device. It’s like playing the game of Twister, where the complaint is “I won’t get off your foot until you get off my hand. Well, I won’t get off your hand until you get off my foot.” Until you are networked to the device, you can’t set up the device so it will work on the network. Ugh.

Obviously there are two different ways to set up the wireless connection. With the Orange Pi Zero, you can:

  1. Create a hard wired ethernet connection thru a DHCP router
  2. Create a serial connection via an FTDI device
    I’d like to talk about that a bit. And here’s my dilemma. I wanted to give a public presentation showing an Orange Pi Zero in use, with an SSH connection and a VNC connection. The problem was that the public venue had an odd WiFi system. At that particular venue, there was an “Open” network without a traditional WPA-PSK password. I had a heck of a time getting the system up and running. I want to share what I’ve learned.

      I was able to bring an old wifi / switch router to the venue. I successfully made ethernet connections from my laptop to the router to the device. Because it was my router, I could use the browser on the laptop to look at the DHCP routing table, and determine the IP address of the hard wired Orange Pi device. From there, I could use my terminal command line tool to SSH over to the device. I was easily able to log in, and set up wifi communications on the Pi. I will say, lugging around a router, two ethernet cables, and a separate power supply is kind of a hassle. I wanted to know if there was an easier way. I ended up trying an FTDI serial cable to communicate. I followed the instructions from Luc Small. This worked rather well.

      One difference for me: I was using a Mac laptop, where Luc is using Windows. To enable the serial communications, I used the native Mac Terminal program. First you want to identify the available ports.

      $ ls /dev/tty.*          # to see all available ports.
      You can then use the $ screen command to establish a simple serial connection. Note: you are going to want to set up the serial connection / terminal / screen BEFORE you power up the Orange Pi.

      $ screen <port_name> <baud_rate>          # to create a connection
      In my case the screen command looked like this:
      $ screen /dev/tty.usbserial-AE00BS5L 115200

      A few more comments. First, the open network setup. Remember, the sample format from Luc Small just won’t work at the facility I was visiting. I did find this reference. I was able to get the open network setup to work by modifying /etc/network/interfaces to:
      auto wlan0
      allow-hotplug wlan0
      iface wlan0 inet dhcp
      wireless-essid publicWifiNetwork

      Do note that although it connected successfully, the DNS lookup didn’t work. We added our own DNS server entries to the image; thank you, Google.

      NMTUI

      Another surprise: There appears to be a much better way to set up a virgin installation on a wifi network. Instead of modifying the contents of /etc/network/interfaces, you can also use the $ nmtui or $ nmcli commands. The nmtui command is particularly easy to use. Follow the prompts, select your network, and type in a password when prompted to do so. From what I can see, this command stores separate connection data in the directory /etc/NetworkManager/system-connections. Each connection gets its own file. The attached image shows three different screens from the $ nmtui function. Note the simple text-based wifi signal strength meter.

      Yet one more surprise. I did a whole lot of testing with the FTDI serial communications. I tested two different operating system images:
      Armbian_5.25_Orangepizero_Ubuntu_xenial_default_3.4.113.img and Armbian_5.25_Orangepizero_Debian_jessie_default_3.4.113.img . There were definitely some differences there. When using Debian Jessie, I had some difficulty with $ nano and $ nmtui commands in serial communication mode. The commands fail to display correctly. You can still make things work, barely, as long as you know which keystrokes will work, but the display doesn’t always look quite right. No problem at all when there is a hard wired ethernet connection. Ubuntu Xenial image didn’t have this issue. Not sure what is going on there.

      Many thanks to Luc Small for his posting. Stay tuned for more Orange Pi projects on this site…

So I’m working on a project for a new customer. There is one aspect of the project that makes it attractive to store the data on one of the big three cloud systems (Amazon Web Services, Microsoft Azure or Google Cloud). I need an https route to catch Ajax submits from a single webpage. This is a small application, but if the money is right perhaps we can make this work.

I did a bit of research, and I was pleasantly surprised to see that Google offers a pretty decent free tier. Hmmm. Well that looks cool. So I do a bit of coding. I create a decent Node.js server application with access to a MongoDB database. That works for me.

The problem is that the pricing I’m seeing on the console doesn’t at all match my inexpensive expectations. I end up doing a whole lot of research, trying to figure out exactly where the charges I’m seeing are really coming from.

So here is the deal. The free tier applies to the Google Cloud App Engine Standard Environment only (Java 7, Python 2.7, PHP 5, Go). It does NOT apply to the Google Cloud App Engine Flexible Environment (Node, Java 8, Python 2.7/3.5, PHP 5/7, Ruby, Go, Custom Runtimes). If you use a flex environment, the minimum number of instances is set at two. And if you add those costs up, it comes to $1.26 per instance per day… no matter how many web hits the site sees. For a 30 day month, that’s 2 instances × $1.26 × 30 days ≈ $75.60 a month just to sit at idle. Add another $6 a month for the Mongo DB. Cool service, yes. Low cost, no.

But I will say… I’m not giving up yet. PHP or Java or Python or Go are still possibilities. Stay tuned.


So I’m always continuing to improve my skills. Currently I’m working on a Coursera class. This one happens to be Real Time Operating Systems. I’m working on an interesting project involving real time analysis of sound to generate dynamic light output. There is a simulator that I want to run to support the class, in this case a scheduling simulator designed for the study of real-time scheduling algorithms. The simulator is SimSo, and the specific program I want to run is Simso Web, a full browser interface for SimSo. The code for Simso-Web is offered on GitHub, and uses both JavaScript and Python in a served HTML web page. Unfortunately, you can’t just run the webpage via a simple open-in-browser interface. That web page has a whole lot of Ajax calls going on. And Ajax calls without a running server generate a whole lot of critical errors: “XMLHttpRequest cannot load …content… Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.” What to do, what to do? In the old days, I’d add an XAMPP server to my computer and work with that. And that would install a program, and a whole bunch of content, who knows where.


But in the ever changing world of software, there is something better available. Enter VirtualBox from our friends at Oracle. So what is VirtualBox, you ask? Good question. VirtualBox is a virtual machine manager that installs as an ordinary application. It allows additional operating systems to be installed within it, each as a Guest OS, and run in a virtual environment. And like anything else, that may not be obvious until you see it in operation.


So here’s the deal.

  • Open up VirtualBox, the virtual machine manager from Oracle.
  • Download Ubuntu Server.
  • Spin up a Ubuntu Server OS. This is pretty straightforward, but you do need to enable bridge mode for your network connection: VirtualBox, Settings, Network, Bridged Adapter. This will enable you to use IP addressing to get to the server from a browser on any computer within your local network.
  • Spin up a LAMP (Linux, Apache, MySQL, PHP) server.
  • Look up your IP address via $ ifconfig
  • Open up the Apache server at the given IP address.
  • Figure out the directory structure for Apache HTML web server content. The address is shown right there on the Apache Ubuntu start page: /var/www/html/
  • Go to that directory, clone the github repository of interest.
  • git clone https://github.com/MaximeCheramy/simso-web.git
    cd simso-web/submodules/simso
    git submodule init
    git submodule update

  • Open up the link in the browser. Success!! See top photo on this blog entry.

Other uses:

–Check out that image on the right… that’s one virtual machine running Ubuntu Desktop, while another machine runs Debian Pixel
–Create a Ubuntu Desktop. Add a shared drive. This is a great tool to use if you ever need to use a torrent to download a large file. Transmission, the Linux torrent tool, works great, and doesn’t expose your desktop or laptop operating system to potential problems.
–This is a great way to test different flavors of an operating system. Not sure if you want Ubuntu desktop or Lubuntu or Edubuntu or Mate? Try ’em all. Quick, easy, done. Toss out what you don’t need without leaving extra installations on your desktop or laptop. Clean, seamless. Move along.
–Android emulator. Run the Genymotion Android emulator within VirtualBox. I was playing around with the native emulator that comes with Android Studio. The laptop I was using wasn’t the newest, or latest or greatest. The native emulator was slow. Brutally slow. And flaky. And fragile. A total pain in the neck to work with. The Genymotion emulator, wow. That thing spun right up. And I was testing Android apps with React-Native in no time. Note, the free version of the Genymotion emulator is for personal use only. But it’s pretty nice.
–Debian with Pixel. Perhaps you want to understand what Pixel is, and how it works. Pixel is an ultralightweight interface to the Debian O/S, generally used on devices like a Raspberry Pi. Now released for PC or Mac. Give it a try.
–Create an older operating system to be able to run old, very old software. In my case, I have a customer whose business depends on software written in 1995 that only runs on PowerPC Apple devices. The underlying software has changed considerably, and he doesn’t have a copy of the source code. The customer has to keep old hardware around to keep his business running smoothly. While I’m duplicating the functionality with more modern software (HTML, JavaScript, Node.js) I’d like to understand how the old stuff works. We’re trying to avoid having to retrain his entire crew on managing something new. Note: I’m still working on this. This is one virtualization I’ve not yet been able to run successfully. So yeah, there is this way old Mac sitting in the corner of my office, grrrr…
–Run GnuCash (Open Source Accounting Software) on a MacBook.
–Try alternative Open Source DataBases… Cubrid or Firebird or MariaDB or…
–Test that voice activation program you’ve always wanted to install on a Raspberry Pi, but don’t have the desk space for all that hardware for your design and coding phase.

So recently I’ve been playing around with Raspberry Pi type Internet of Things (IoT) devices… In my case I’m experimenting with the Orange Pi Zero and the C.H.I.P. Both of these are way small, way powerful headless computers. I’m trying to do fun and interesting things using these as control devices for special input/output. One of the things I’m interested in is ultra low latency… that is, an absolute minimal amount of time between user input and computer output.

I found an interesting article comparing the use of different programming languages to control things on a Raspberry Pi. Basically the guy hooks up an oscilloscope to an I/O pin, and then turns the pin on and off via different programs / shell commands. It’s clear from that test: if you want to control things quickly, go to C for the win. So C it is.

I started playing around with code. I started playing around with different hardware.

First go was with the Orange Pi Zero, with the Armbian Linux server image (Debian Jessie, legacy kernel 3.4.113). The first test was to use simple General Purpose Input/Output (GPIO) to blink some LEDs. It turns out the Orange Pi is a little bit off standard. To get the GPIO to work, we need to use a modified WiringPi library, courtesy of GitHub user zhaolei.

Here’s a photo of the Orange Pi Zero, an add-on shield with extra USB connections, an audio jack, a microphone and an infrared receiver. I also made a couple of LED jumpers for easy blinky I/O testing. In my case I wanted to use the on-board microphone and audio jack… they work pretty well.

Let’s talk a little bit about software. With this device, I always planned on a headless installation. Generally my only contact would be through SSH, either from a desktop / laptop (Git Bash command line interpreter) or a mobile phone (with either the Termius or WebSSH app). Coding consists of me writing the original in Atom.io, then copying the code and pasting it into .c files via the nano editor. Compilation and shell run commands are run via the SSH tool.

I ALWAYS want a backup copy of the software I’m writing on my laptop/desktop. The only downside is there is no great way to delete a large block of code with nano. Frankly it’s easier to delete the file and copy/paste anew.

SSH Version:

  • Setup the remote device. Probably best to see that things are updated, via $ sudo apt-get update followed by $ sudo apt-get upgrade.
  • Verify Git is installed on the remote device. If not there, add it with $ sudo apt-get install git-core
  • Install the WiringOp (Orange Pi) library, via $ git clone https://github.com/zhaolei/WiringOP.git -b h3
  • Compile that library on the Orange Pi via $ cd WiringOP $ chmod +x ./build $ sudo ./build
  • Create a test file in the user directory (~/) via $ nano GPIO.c. Copy and paste the code from Atom.io, then exit nano, saving the file.
  • Compile the test file… $ gcc GPIO.c -o GPIO -lwiringPi . The -lwiringPi flag links in content from the WiringOP library.
  • Finally run the compiled file via $ ./GPIO . If you did this right, you should see some blinking LEDs. Yippee.

So far, so good. Now it’s about this time that I realize I will need to be writing some pretty involved programs. And one thing I don’t have is the ability to debug my code interactively in a convenient way. It’s about this time in my research that I stumble over this posting: Visual C++ for Linux Development. Wow. No, really… Wow. You can use Visual Studio to manage code, keep a copy on your laptop/desktop computer, push code to the remote device, AND run code in debugger mode. Way cool. The system uses GDB (the GNU DeBugger) to manage the process remotely.

Now at this point, I spent a heck of a long time trying to understand how Visual C++ for Linux development actually worked. I had more than a few problems, and I couldn’t tell if my troubles were based on my custom libraries or my Visual Studio setup. The answer here was to go back to basics, run the thing one step at a time and see that everything worked well. Now my first off-standard choice was the Orange Pi Zero. For a Visual Studio proveout, I reverted back to basics, and used a Raspberry Pi Model B. I set up a clean install.

  • Clear the SD card via the SDFormatter tool (Format type = Full, Format size adjustment = On). I used a 16GB Samsung Evo MicroSD card in a holder.
  • Download Raspbian Jessie Lite Minimal Image from raspberrypi.org
  • Unzip it. Push OS to SD card via Etcher.io
  • Update/Upgrade the OS. Add Git (same as above).
  • Add the WiringPi library to the remote device per these installation instructions.
  • $ git clone git://git.drogon.net/wiringPi $ cd ~/wiringPi $ ./build
  • You can verify the install via $ gpio readall which creates a handy pinout identification map.


At this point its time to start up Microsoft Visual Studio. I’m using Visual Studio Community 2015, version 14.0.25431.01 with Update 3.

  • You are going to want to download and install the Visual C++ for Linux Development extension.
  • Install a few tools on the remote device $ sudo apt-get install openssh-server g++ gdb gdbserver
  • Add a few LEDs to your Raspberry Pi. I added one LED to wiringPi pin #0 and another to wPi pin #1.
  • For me, this added one LED to physical pin 11 (GND to pin 9) and one LED to physical pin 12 (GND to pin 6). Verify that you have the LEDs oriented in the correct direction.
  • Create a new Project. Select Templates –> Visual C++ –> Cross Platform –> Linux
  • For this quick test, select ‘Blink (Raspberry)’. Accept the defaults, with one exception. Give the file name a .c suffix (and not a .cpp suffix)
  • You will have to set up the program as an ARM processor program, with a pull down selection in the top menu bar.
  • At some point you will have to add login credentials, via (Top Menu) Tools –> Options –> Cross Platform –> Connection Manager.
  • You can observe output via (Top Menu) Debug –> Linux Console.
  • When you click “Remote GDB Debugger” Visual Studio performs the compilation and execution processes.

Visual Studio creates the following files on the remote device (in this case, my Raspberry Pi). Project = Blink, code = main.c

    projects directory
        Blink directory
            bin directory
                ARM directory
                    Debug directory
                        Blink.out file
            obj directory
                ARM directory
                    Debug directory
                        main.o file
            main.c file

And that Blink.out file is fully executable via SSH: $ ./Blink.out . Note that the WiringPi library is located elsewhere on the remote Linux device. If you inspect the sample code carefully, you will note two things.

  1. Right click on the Blink project in the Solution Explorer. Choose Properties –> Linker –> Input. In the block entitled “Library Dependencies” you will note ‘wiringPi’. This is the setting that tells the system to look for that library on the remote device. The files needed are actually located at /usr/local/lib (normally xxx.so files). Note: there is one thing here I wasn’t very happy about. What if you’ve neglected to compile the library files correctly? If you do that, you get the error message “fatal error: wiringPi.h: No such file or directory”. Wait, what? For a missing file on a remote device, that error message seems to be lacking, and probably should be improved. It’s not immediately obvious that the error is for code content on the remote device; instead, you are wondering what you did wrong on the desktop/laptop machine. My recommendation is “fatal error: wiringPi.h: No such file or directory at remote Linux device.” or words to that effect. The folks at Microsoft seem to agree, and as a result of my email to them, they’ve added this to their open issues list.
  2. The other thing of note on the Blink project is all the comments involving // LED Pin - wiringPi pin 0 is BCM_GPIO 17. In the history of Arduino and Raspberry Pi there have been a whole lot of implementations of GPIO pin numbering. The whole WiringPi GPIO thing gives you the chance to custom define different pin schemas. I will admit, for most of us the whole thing is confusing. In this example, it’s very easy to get pin #0 to function. Starting from the code as written, it’s way difficult to get pin #1 to function. In this case BCM_GPIO pin #17 = WiringPi pin #0 = physical pin #11, but what pin is used for WiringPi pin #1? Hint: if you use the mating BCM_GPIO pin (#18), that is a total fail. Why? Because you have to declare each BCM_GPIO pin specially via the Property Pages –> Build Events –> Remote Post-Build Event command. Yes it is important to understand that process, but not, Not, NOT for a beginner exercise. I wasted a good portion of time trying to understand the entire numbering scheme. For 99% of us, it’s just best to understand our hardware, run $ gpio readall to obtain a pin map for your hardware, and then simply use the WiringPi (wPi) pin numbering.

And heck, to make things easier, I’ll include my beginner-approved code sample, RaspberryPiBlink.c (suggest you right click, and save the document).

And by the way, just so we don’t forget why we’re using Visual Studio with the Raspberry Pi for programming… with the GNU Debugger installed we can STEP DEBUG our program on the REMOTE device from within Visual Studio on the laptop/desktop, easily. You can see a variable’s content. You can see the order of processing for complex calls. This is way cool, and well worth the tiny bit of extra effort it takes to get everything set up smoothly.


The more I know about statistics, the more I despise sports.
I’m a big Dr. Deming fan. You remember Dr. W. Edwards Deming, right? He was a foremost statistician and evangelist, lecturing to manufacturing folks a few years ago. I was fortunate enough to see him lecture in Dearborn once. He did this cool thing with the audience. He asked ten volunteers to step up on the stage. He told them they were new employees of the Red Marble Company. The goal of the Red Marble Company was to… produce red marbles. Dr. Deming had this big bucket full of red and white marbles. He held the bucket up high, and each employee was asked to select ten marbles. The employees reached high into the bucket (sight unseen) and selected ten marbles. When all the employees had ten marbles, Dr. Deming did a tally of the number of red and white marbles selected by each employee. One lucky employee, Susan, produced 7 red marbles. Dr. Deming heaped lots of praise on Susan, gave her extra prizes and even a $$ bonus. Susan was so proud, she was beaming on stage. Then Dr. Deming went back to the tally results. One poor soul, Ernie, only produced 3 red marbles. Poor Ernie. No, really, poor Ernie. Dr. Deming verbally abused the guy on stage, made him feel about one inch tall. “How could you do so poorly? Look at what Susan has done!” This went on for some time. You could actually see the guy cringe on stage. Everybody in the audience was quite uncomfortable with Ernie’s beat-down.

But the message was CLEAR. The results of each employee’s marble selection were absolutely random. And it made zero sense to reward Susan and punish Ernie for what was clearly a series of random events. Woe is Management. Look for significant differences between employees, but account for randomness as exactly what it is.

World Series Cubs Indians Baseball

And let’s fast forward to game seven of the 2016 World Series. I don’t normally watch sports on TV, but this game seemed special. I’ve lived in both Cleveland and Chicago for at least five years each. I know how important a win like the World Series can be to each of those cities. The game was fun to watch, well played by two very awesome teams. It went ten innings, each of them edge-of-your-seat exciting. And at the conclusion of that game, the winner would be declared World Series Champions. As I watched the game, complete with DVR rewind, and presenters with a paint-on-the-screen strike box, I noticed there were many “questionable” calls made that I’m not sure I agree with. There were numerous pitches that looked like strikes but were called balls, and vice versa. I know one scoring runner from the Cubs took his base on a 3-2 count that I believed to be a strike but was called a ball. There may have been one base runner event (out? not out?) that didn’t hold up well to instant replay, but wasn’t reversed… I will say the questionable calls went in both directions, one time supporting the Cubs, and another supporting the Indians. Shit happens. If any one of those events had gone the other way, we could have had a different final result. They’d be celebrating in Cleveland instead of in Chicago.

Am I complaining about the umpire staff? No way. Tough job, and I certainly wouldn’t want it. But when I step back and look at the game as a whole, these events seemed random. Random enough that it reminded me of Dr. Deming’s Red Marble Company. Two great baseball teams, both with high hopes and aspirations… but with only one declared the winner. In my view, that win, at least in that game, was largely a coin flip. And that’s why I gotta say… The more I know about statistics, the more I despise sports.

Baseball photo courtesy of AP Photo/Charlie Riedel
