
I had an interesting issue trying to install/upgrade to Mavericks on an Apple Mac Pro 4,1 desktop.

It was running Snow Leopard, but if I tried to install Mountain Lion or Mavericks it would give a disk error on reboot into the installer.

I made a USB installer, booted off that, and could see that Disk Utility couldn’t see any drives inside the installer.

As the equipment was working fine on Snow Leopard, this wasn’t a hardware issue.

I updated the firmware on the machine to the latest version, but still had the same issue.
Cleared NVRAM, same.
Compared SMC and AHCI firmware revisions against a good machine; everything was the same.
Checked dmesg on the machine during a Mavericks installer boot, and could see that it was having issues with wifi, then with SATA.

Boot log Mavericks - No internal SATA detected
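To dig into the dmesg output from the installer’s Terminal, something along these lines narrows things down (a sketch – the exact driver messages will vary by machine):

dmesg | grep -iE 'ahci|sata|atheros|airport'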

I checked the other machine, and saw that it had a different wifi card. Bingo!
I removed the wifi card from the non-working machine, and rebooted back into the installer, and it could suddenly see the drives again.

My conclusion is that the Intel ACH10 drivers, and the Atheros ARB5X86 drivers conflict with each other in 10.8 and above. I tested in the 10.10 beta too to be sure, and had the same issue.

Something Apple needs to sort out, I guess…

Interesting issue – it took me a little while to troubleshoot.


I’ve noticed a little spate of password-guessing attempts via Roundcube – a webmail program we use for mail over at https://mail.computersolutions.cn

Roundcube does have captcha plugins available which will mitigate this, but users will complain if they have to type in a captcha to log in to their mail.

Fail2ban provides an easy solution for this.

Roundcube stores its logs in a logs/errors file.

If I take a look at a sample login failure, it looks something like the example below

[09-Jun-2014 13:43:38 +0800]: IMAP Error: Login failed for admin from 105.236.42.200. Authentication failed. in rcube_imap.php on line 184 (POST /?_task=login&_action=login)

We should be able to use a regex like:
IMAP Error: Login failed for .* from

However, fail2ban’s <HOST> match then includes a trailing dot, and fail2ban doesn’t recognise the IP.
I eventually came up with the overly complicated regex below, which seems to work:

IMAP Error: Login failed for .* from <HOST>(\. .* in .*?/rcube_imap\.php on line \d+ \(\S+ \S+\))?$
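As a quick sanity check before wiring this into fail2ban, you can count candidate lines in the log with plain grep (adjust the path to your install; the one below matches the example further down):

grep -c 'IMAP Error: Login failed for' /home/roundcube/public_html/logs/errors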

Let’s add detection for that into fail2ban.
First up, we need to add roundcube into /etc/fail2ban/jail.conf

[roundcube]
enabled  = false
port     = http,https
filter   = roundcube
action   = iptables-multiport[name=roundcube, port="http,https"]
logpath  = [YOUR PATH TO ROUNDCUBE HERE]/logs/errors
maxretry = 5
findtime = 600
bantime = 3600

Note that we are not enabling the jail yet.

Change [YOUR PATH TO ROUNDCUBE HERE] in the above to your actual roundcube folder
eg /home/roundcube/public_html/logs/errors

Next, we need to create a filter.

Add /etc/fail2ban/filter.d/roundcube.conf

[Definition]
failregex = IMAP Error: Login failed for .* from <HOST>(\. .* in .*?/rcube_imap\.php on line \d+ \(\S+ \S+\))?$

ignoreregex =

Now that we have the basics in place, we need to test out our filter.
For that, we use fail2ban-regex.
This accepts 2 (or more) arguments.

fail2ban-regex LOGFILE  FILTER 

For our purposes, we’ll pass it our logfile, and the filter we want to test with.

eg

fail2ban-regex  /home/roundcube/public_html/logs/errors /etc/fail2ban/filter.d/roundcube.conf  |more

If you’ve passed it your log file and it contains hits, you should see something like this:

Running tests
=============

Use regex file : /etc/fail2ban/filter.d/roundcube.conf
Use log file   : /home/www/webmail/public_html/logs/errors


Results
=======

Failregex
|- Regular expressions:
|  [1] IMAP Error: Login failed for .* from <HOST>(\. .* in .*?/rcube_imap\.php on line \d+ \(\S+ \S+\))?$
|
`- Number of matches:
   [1] 14310 match(es)

Ignoreregex
|- Regular expressions:
|
`- Number of matches:

Summary
=======

Addresses found:
[1]
    61.170.8.8 (Thu Dec 06 13:10:03 2012)
    ...[14309 more results in our logs!]

If you see hits, great – that means our regex worked, and you have some failed logins in the logs.
If you don’t get any results, check your log (use grep) and see if the log warning has changed. The regex I’ve posted works for Roundcube 0.84.

Once you’re happy, edit jail.conf and enable the jail (set enabled = true), then restart fail2ban with

service fail2ban restart
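Once it’s running, fail2ban-client is handy for checking what the jail is doing, and for unbanning an address if you lock yourself out (the IP below is just an example):

fail2ban-client status roundcube
fail2ban-client set roundcube unbanip 1.2.3.4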


Those of you who follow tech news may have heard about the HeartBleed vulnerability.

This is a rather large bug in OpenSSL, an SSL library in common use, that allows an attacker to read chunks of memory from an affected server. Typically this data contains usernames and passwords for user accounts, or the private keys servers use to encrypt data over SSL.

Once the exploit was released, we immediately tested our own servers to see if we were vulnerable. We use an older non-affected version of SSL, so none of our services are/were affected.

Unfortunately a lot of larger commercial services were affected.

Yahoo in particular was slow to resolve the issue, and I would assume that its users’ passwords are compromised.

We saw usernames and passwords ourselves when we tested the vulnerability checker against Yahoo…

We advise you to change your passwords, especially if you used the same password on other sites, as you can safely assume those are compromised as well.

I also strongly recommend this action for any users of online banking.

There is a list of affected servers here –
https://github.com/musalbas/heartbleed-masstest/blob/master/top1000.txt

Further information about this vulnerability is available here –
http://heartbleed.com/


Looks like Ubuntu 13 has changed the dev IDs for disks!
If you use ZFS, like we do, then you may be caught out by this subtle, naughty change.

Previously, disk-id’s were something like this:
scsi-SATA_ST4000DM000-1CD_Z3000WGF

In Ubuntu 13 this changed:
ata-ST4000DM000-1CD168_Z3000WGF

According to the FAQ in ZFS on Linux, this *isn’t* supposed to change.

http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool

/dev/disk/by-id/: Best for small pools (less than 10 disks)
Summary: This directory contains disk identifiers with more human readable names. The disk identifier usually consists of the interface type, vendor name, model number, device serial number, and partition number. This approach is more user friendly because it simplifies identifying a specific disk.
Benefits: Nice for small systems with a single disk controller. Because the names are persistent and guaranteed not to change, it doesn't matter how the disks are attached to the system. You can take them all out, randomly mixed them up on the desk, put them back anywhere in the system and your pool will still be automatically imported correctly.

So… on a reboot after upgrading a client’s NAS, all the data was missing, with the nefarious pool error.
See below:


root@hpnas:# zpool status
  pool: nas
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid. There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME                                    STATE     READ WRITE CKSUM
        nas                                     UNAVAIL      0     0     0  insufficient replicas
          raidz1-0                              UNAVAIL      0     0     0  insufficient replicas
            scsi-SATA_ST4000DM000-1CD_Z3000WGF  UNAVAIL      0     0     0
            scsi-SATA_ST4000DX000-1CL_Z1Z036ST  UNAVAIL      0     0     0
            scsi-SATA_ST4000DX000-1CL_Z1Z04QDM  UNAVAIL      0     0     0
            scsi-SATA_ST4000DX000-1CL_Z1Z05B9Y  UNAVAIL      0     0     0

Don’t worry, the data’s still there. Ubuntu has just changed the disk names, so ZFS assumes the disks are broken.
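You can see what the new names look like on your system by listing the by-id directory before re-importing; for example:

ls -l /dev/disk/by-id/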

Simple way to fix it is to export the pool, then reimport with the new names.
Our pool is named “nas” in the example below:

root@hpnas:# zpool export nas
root@hpnas:# zpool import -d /dev/disk/by-id -f nas

As you can see, our pool is now a happy chappy, and our data should be back


root@hpnas:/dev/disk/by-id# zfs list
NAME          USED   AVAIL  REFER  MOUNTPOINT
nas           5.25T  5.12T   209K  /nas
nas/storage   5.25T  5.12T  5.25T  /nas/storage
root@hpnas:/dev/disk/by-id# zpool status
  pool: nas
 state: ONLINE
  scan: none requested
config:

        NAME                                 STATE     READ WRITE CKSUM
        nas                                  ONLINE       0     0     0
          raidz1-0                           ONLINE       0     0     0
            ata-ST4000DM000-1CD168_Z3000WGF  ONLINE       0     0     0
            ata-ST4000DX000-1CL160_Z1Z036ST  ONLINE       0     0     0
            ata-ST4000DX000-1CL160_Z1Z04QDM  ONLINE       0     0     0
            ata-ST4000DX000-1CL160_Z1Z05B9Y  ONLINE       0     0     0

errors: No known data errors

Bit naughty of Ubuntu to do that imho…


Many many moons ago, I saw a KickStarter for something that interested me – a motion controller, so I signed up, paid, and promptly forgot about it.

Last week, I got a notice about shipping, and then a few days later FedEx China asked for a sample of my blood, a copy of my grandmother, and 16 forms filled in triplicate so that they could release the shipment.

Luckily we had all that at hand, and after a quick fax or three we had a unit delivered to our offices. As pictures are better than words, take a look below:

leap insides

leap box

As I’m not really much for reading instructions, I plugged mine in, and saw that it pops up as a standard USB device (well duh, it’s USB!).

Screen Shot 2013-07-24 at 2.24.03 PM

It does need drivers to make it work, so off to http://leapmotion.com/setup I went, to grab drivers.

Downloaded, and installed –
Screen Shot 2013-07-24 at 2.40.23 PM

They’ve definitely spent some quality time making sure that things look good.
Well, maybe not; it crashed almost immediately!

Screen Shot 2013-07-24 at 2.41.39 PM

It did pop up a message via notifications before it crashed though.

Screen Shot 2013-07-24 at 2.43.04 PM

Reopened their app, and it wants me to sign up. Their app is quite buggy – I ran the updater, and it also froze, leaving the updater window stuck in the middle of the screen (Software Version: 1.0.2+7287).
I’m *really* not a big fan of that, so off to find some apps that I don’t have to download from an app store.

BetterTouchTool http://blog.boastr.net/ has preliminary support for touch ui, so I thought I’d download that first.

Installed BTT, and added some gestures using the Leap Motion settings.

Screen Shot 2013-07-24 at 3.27.39 PM

Not much seemed to be happening – my initial settings didn’t seem to make anything happen when I waved my hands over the device, so I went back to the Leap settings.

Leap has a visualizer tool which doesn’t work on my Mac – it immediately crashes, probably because I have 3 screens and they didn’t test that very well.

Screen Shot 2013-07-24 at 3.29.40 PM

So far, not really a good experience. Consistent repeatable crashing in the Leap software.
They do have another tool in the settings – Diagnostic Visualizer, which actually does work.

Screen Shot 2013-07-24 at 3.31.59 PM

Here it is showing detection of 5 fingers.
I still didn’t have any luck with BTT and Leap, so I closed and reopened both software packages, and then stuff started working. Again, this reeks of bugginess…

The BTT app specifically states that its Leap support is alpha, and I’m pretty sure that the initial not-working part was not his app’s fault…

Now that I finally had it working, how is it?

Well, the placement of the sensor is important. It seems like it doesn’t actually read directly above the sensor – the IR LEDs appear to be angled at roughly 45 degrees towards you – as I got better results when I placed the sensor in front of my keyboard and gestured above the keyboard.

It’s extremely flaky though – I can’t reliably get it to detect finger movements. You have to try, and retry, and retry the same action before it works. It’s not quite the ‘swipe your fingers over it and it works’ experience I was hoping for. There is also latency in the motion detection.
Initial detection of fingers takes about 300-500ms before the app sees them, so a swipe over the sensor doesn’t work unless you hold your fingers above it, then swipe or perform your action.
This really doesn’t help it.

As placement of the sensor is extremely important, I tried a number of different arrangements, but all were pretty similar in reliability. I honestly get about 20-30% of gestures recognized at best. Even with the Diagnostic Visualizer running so I could see what the device thought it was seeing, it was hard to reliably perform actions, even when I sat in its sweet spot. My Kinect is a *lot* better at this than the Leap is.

As it stands, this is little more than a tech demo, and a bad one at that.
If I could persuade one of my staff to video my attempts to use it so you could see, you’d understand!

So, this has a long way to go before it’s something usable, but I do have hope.
I’m sure that the software will improve, but for now this is definitely a concept piece rather than a useful tool.

I’m not unhappy that I paid money for it though. The interest in this technology has put a lot of investment capital behind the device, and it will improve.

That said, don’t buy…yet.

My rating: 2/10

Addendum – my device gets rather hot in use. Not warm. Hot. Noticeably so. Even in the bare 10-15 minutes I’ve had it running.

Not sure how long it will last in Shanghai summers..

Addendum #2 – it seems to have cooled down a bit from the rather-hot-to-the-touch temperature it was running at, although now it’s stopped working completely.

Dmesg shows –

USBF: 1642989.607 AppleUSBEHCI[0xffffff803bb6a000]::Found a transaction which hasn't moved in 5 seconds on bus 0xfd, timing out! (Addr: 6, EP: 2)
USBF: 1642997.609 AppleUSBEHCI[0xffffff803bb6a000]::Found a transaction which hasn't moved in 5 seconds on bus 0xfd, timing out! (Addr: 9, EP: 2)
USBF: 1649135.492 AppleUSBEHCI[0xffffff803bb6a000]::Found a transaction which hasn't moved in 5 seconds on bus 0xfd, timing out! (Addr: 10, EP: 0)
USBF: 1649141.495 AppleUSBEHCI[0xffffff803bb6a000]::Found a transaction which hasn't moved in 5 seconds on bus 0xfd, timing out! (Addr: 10, EP: 0)
USBF: 1649147.499 AppleUSBEHCI[0xffffff803bb6a000]::Found a transaction which hasn't moved in 5 seconds on bus 0xfd, timing out! (Addr: 10, EP: 0)
USBF: 1649153.503 AppleUSBEHCI[0xffffff803bb6a000]::Found a transaction which hasn't moved in 5 seconds on bus 0xfd, timing out! (Addr: 10, EP: 0)

Unplugged it and replugged it in, and it’s working again, but it looks like both the drivers and the UI side need work. This isn’t production ready in any way, shape or form.


Our underlying hardware uses Dell equipment for the most part inside China.
We use Debian as an OS, and Dell has some software available on their Linux repos specifically tailored for their (often rebranded from other people’s) hardware –
e.g. RAC (Remote Access Controller) bits and pieces, RAID management tools, and BIOS updates.

So, enough about why, how do we use their repo?

First add it as a source
echo 'deb http://linux.dell.com/repo/community/deb/latest /' > /etc/apt/sources.list.d/linux.dell.com.sources.list

…then add gpg keys –

gpg --keyserver pool.sks-keyservers.net --recv-key 1285491434D8786F
gpg -a --export 1285491434D8786F | apt-key add -

Run apt-get update to make sure the new repo is picked up, then you can install their goodies.

apt-cache search dell will show you what’s in their repo. Pick wisely!
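From there it’s a normal apt-get install of whatever your hardware needs. As an illustration only – the package name below is just an example, so trust what apt-cache search shows for your hardware rather than this:

apt-get install srvadmin-base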


Behind the scenes we use ZFS as storage for our offsite backups.
We have backups in 2 separate physical locations plus the original data on the server(s), as data is muy importante!
ZFS is a rather nice storage file system that improves radically on older RAID-based solutions, offering a lot more funky options, like snapshots (where the OS can keep multiple versions of files, similar to Time Machine backups) and, more importantly, compression.

At some point we’ll be deploying a SAN (storage area network) on a blade server in the data center using ZFS for the data, with lots of three and four letter acronyms –
ESXi as the base OS, then a VM providing ZFS storage and iSCSI targets with hardware passthrough for the other VMs, then other blades in the server doing clustering.
Right now we’re waiting on an LSI SAS card (see, more acronyms!), so we can deploy…, but I digress.

Back to ZFS.

The ZFS version we use now supports feature flags, yay! – and that means we can choose alternate compression methods.
There is a reasonably new compression algorithm called LZ4 that is now supported; it improves both read and write speeds over normal uncompressed ZFS, and has benefits over compressed ZFS using the standard compression algorithm(s).

To quote: “LZ4 is a new high-speed BSD-licensed compression algorithm written by Yann Collet that delivers very high compression and decompression performance compared to lzjb (>50% faster on compression, >80% faster on decompression and around 3x faster on compression of incompressible data)”

First up, check if your ZFS version supports it:


zpool upgrade -v
This system supports ZFS pool feature flags.

The following features are supported:

FEAT DESCRIPTION
-------------------------------------------------------------
async_destroy                         (read-only compatible)
     Destroy filesystems asynchronously.
empty_bpobj                           (read-only compatible)
     Snapshots use less space.
lz4_compress
     LZ4 compression algorithm support.

Mine does (well duh!)

So… I can turn on support.

As a note, lz4 is not backward compatible, so you will need to use a ZFS version that supports feature flags *and* lz4.
At the time of writing nas4free doesn’t support it; zfsonlinux does though, as do OmniOS, illumos and other Solaris-based OSes.

If you aren’t sure, check first with the command above and see if there is support.
Next step is to turn on that feature.

My storage pools are typically called nas or tank.
To enable lz4 compression, it’s a two-step process.

zpool set feature@lz4_compress=enabled POOL
zfs set compression=lz4 POOL/DATASET

I have nas and nas/storage, so I did -

zpool set feature@lz4_compress=enabled nas
zfs set compression=lz4 nas
zfs set compression=lz4 nas/storage
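To confirm the feature flag itself took, you can query the pool directly – it should report enabled, and then active once lz4-compressed data has actually been written:

zpool get feature@lz4_compress nas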

Once the flag is set, you can set compression on the pool’s datasets. If you set it at the parent dataset level, then new child datasets inherit the compression setting.

Here are my volumes / pools

zfs list
NAME USED AVAIL REFER MOUNTPOINT
nas 6.09T 4.28T 209K /nas
nas/storage 6.09T 4.28T 6.09T /nas/storage

I’ve already set compression on (although it only applies to data written after the setting is enabled – existing data stays as it was).
We can check compression status by doing a zfs get all and filtering for compress:

zfs get all | grep compress
nas compressratio 1.00x -
nas compression lz4 local
nas refcompressratio 1.00x -
nas/storage compressratio 1.00x -
nas/storage compression lz4 local
nas/storage refcompressratio 1.00x -

If I create a new dataset, you’ll see it gets created with the same compression setting, inherited from its parent.

zfs create nas/test
root@nas:/nas# zfs get all | grep compress
nas compressratio 1.00x -
nas compression lz4 local
nas refcompressratio 1.00x -
nas/storage compressratio 1.00x -
nas/storage compression lz4 local
nas/storage refcompressratio 1.00x -
nas/test compressratio 1.00x -
nas/test compression lz4 inherited from nas
nas/test refcompressratio 1.00x -

I’ll copy some dummy data onto there, then recheck.

nas/test compressratio 1.71x -
nas/test compression lz4 inherited from nas
nas/test refcompressratio 1.71x -

Nice!
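If you want to see the logical (uncompressed) size next to the on-disk usage for a dataset, zfs get can report that too on reasonably recent ZFS versions; for example, for the test dataset above:

zfs get compressratio,compression,used,logicalused nas/test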

Obviously, compression ratios will depend highly on the data, but for our purposes – mostly web data, mail and similar – we’re heavy on text content, and benefit a lot from compression.

Once we get our SAN up and running, I’ll be looking at whether I should still be using rsync, or whether I should look at sending ZFS snapshots to ZFS storage on other servers.
That, though, is a topic for another day.


A rather hacky fix to sort out UTF-8 / latin-1 encoding issues (the familiar â€œ-style mojibake) in posts after export from a rather badly encoded MySQL db in WordPress. The bare 'â€' replacement goes last so it doesn’t eat the longer sequences.

UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€œ', '“');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€™', '’');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€˜', '‘');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€“', '–');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€”', '—');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€¢', '-');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€¦', '…');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'â€', '”');
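If you want to sanity-check before running these (and you should back up the wp_posts table first), previewing a few affected rows is easy – this assumes the damage shows up as the â€ sequences above:

SELECT ID, post_title FROM wp_posts WHERE post_content LIKE '%â€%' LIMIT 10;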

Not recommended, but I had a use for it.


Screen Shot 2013-05-26 at 2.59.04 PM

As I have a vested interest in consistent electricity back home (see my other recent post on solar for details), and have been in discussion with the council about net metering and grid tie, I’ve been doing quite a bit of random reading about electricity distribution and its various facets.

Not many of us know that the power company / municipality also uses in-line signalling (aka ripple control) to implement power control and load shedding, so I thought I’d do a little writeup on that.

Many of us have noticed that streetlights don’t always come on or go off when it’s light or dark – they appear to be on a timer system.

What most people don’t know is that the timer system controls are actually implemented centrally at substations, and these add signals to the power lines to tell the equipment to turn off / on when instructed.

This is done using ripple control codes.

With ripple control, a small signal is added to the incoming A/C at a distribution location – e.g. a substation. This signal is read by a special relay installed on the larger circuits (typically the geyser), which turns power off or on when the electricity company requires it – usually when power is scarce and they need to shed some load.

As this signalling can work on multiple channels, each listening relay can be set to listen to a specific channel, and used to power specific things on / off remotely (e.g. Streetlights).

In South Africa, we use DECABIT signalling to tell things to turn off and on, as well as the older K22 signalling standard.

When load shedding needs to occur, the electricity distribution system needs to act fast to avoid system failures. Most of this is automated, and happens in a defined sequence.
Each protection mechanism has a specific time in which it acts – a latency. Responses to conditions also have a latency – e.g. getting additional idle power plants online to provide more power takes time – so it’s important for the grid to have multiple control and response mechanisms to respond to load. Each response mechanism also has a different cost impact, so it’s also important for the electricity provider to manage these well.

A diagram of this is below (Excerpted from http://www.anime-za.net/tech/literature/Enermet_Farad.pdf ):

Screen Shot 2013-05-26 at 3.44.21 PM

For light variances in load, the frequency changes as generators speed up or slow down to match demand. If there isn’t enough supply to meet load, the frequency drops, and large-scale equipment will disconnect until load decreases. This happens almost instantaneously – responses to these issues resolve within a few milliseconds to a second. This is called Under-Frequency Load Shedding (UFLS).

Screen Shot 2013-05-26 at 3.38.27 PM

As seen in the diagram above, Eskom implements automated under-frequency load shedding in increasing percentage steps based on frequency thresholds.

(Additional details are in the PDF below)

http://www.systemoperator.co.nz/f3210,36010947/Appendix_A_-_A_Collation_of_International_Policies_for_Under_Frequency_Load_Shedding.pdf

The next stage of load shedding is the one we’re interested in – ripple signalling. If the system still has too much load after 1 second, it sends out a DECABIT signal to turn off more equipment. DECABIT signalling has a latency of about 7 seconds – a minimum DECABIT signal frame is 6.6 seconds – so this is a second-stage response to issues.

As each substation can be connected to up to 20,000 homes/customers (depending on substation load capacity), this allows localized load shedding where it’s needed, when it’s needed.

Eskom calls this Demand Market Participation, and has roughly 800MW of load tied into this mechanism. Municipalities are particularly keen on putting loads onto these mechanisms via DECABIT-compliant relays, as this saves them peak power fees when loads are high – if they can temporarily cut off power to consumers for 10 seconds to 10 minutes for non-essential high loads, then they can substantially reduce what power costs them from Eskom, and make additional profit.

A good writeup on Demand Market Participation is below:

http://www.enerweb.co.za/brochures/AMEU%20Conference%20-%20Enerweb%20VPS%20Paper%20-%20201109%20-%20%20V1.0.pdf

Eskom benefits as it can temporarily avoid adding more infrastructure to cope with growth.
This has been the case for a few years now, but it only delays the inevitable – you do need to invest in infrastructure, not just incentivize clients to use less.

Eskom also has a secondary mechanism (using the same theory – let’s encourage you to turn off power) called VPS. They have an additional 50,000MW of connections using this on a contractual basis – typically industrial users – and are looking to increase this number.

It’s only been through the introduction of these mechanisms that we’ve been able to stave off grid collapse. It’s gotten so bad that industrial users have been looking closely at what they can do to provide their own power when Eskom can’t.

Other countries – notably Germany, and the UK, have allowed consumers to become producers, by encouraging localized small scale production of electricity, thus helping the grid without requiring additional investment from the incumbents. This is called net metering – where both inputs and outputs are metered.

Eg – if you have a solar system that provides excess power during the day, it can feed into the grid – (when it needs it most), and they’ll credit you for your participation.

So far, South Africa has been rather reticent to implement this, as the short-sighted view is that it’s “stealing” from the incumbents’ profits.

A choice excerpt from that PDF is this –
Residential load can also be incorporated within the VPS, particularly when integrated with Smart Metering systems. Numerous pilot and small scale projects are being undertaken within both Municipalities and Eskom in response to the DOE’s Regulation 773 of 18 July 2008.

The Department of Energy’s regulation can be found here –
http://www.energy.gov.za/files/policies/Electricity%20Regulations%20on%20Compulsory%20Norms%20and%20standards%20for%20reticulation%20services%2018Jul2008.pdf

These state that all systems over a certain size require that smart metering be installed by 2012. As you may have guessed, quite a few municipalities have not met this deadline, and Eskom has been dragging its feet on that too.

Ironically, the introduction of smart metering would actually help the grid here in South Africa, as IPPs (independent power producers) would make the grid more stable by providing additional energy when needed, and at a lower cost than the incumbents can generate it for.

This does have its issues however – most municipalities generate revenue from electricity, and so are loath to change the status quo, even when it would benefit the country as a whole.

So, it’s unlikely to be implemented in the short to medium term, unless the government drags them kicking and screaming through the process.

In summation, this –

486837_586695331355335_1633072872_n

Lawrence.
———

References:
http://en.wikipedia.org/wiki/Zellweger_off-peak
http://www.anime-za.net/tech/ripple_index.html
http://mybroadband.co.za/vb/showthread.php/134334-And-so-I-have-proved-the-ripple-control-system-is-buggered
DECABIT Ripple Signal Guide
Thesis on the financial implications of relaxing frequency control as a mechanism.
http://www.energy.gov.za/files/policies/Electricity%20Regulations%20on%20Compulsory%20Norms%20and%20standards%20for%20reticulation%20services%2018Jul2008.pdf – DoE Regulations
http://www.enerweb.co.za/brochures/AMEU%20Conference%20-%20Enerweb%20VPS%20Paper%20-%20201109%20-%20%20V1.0.pdf – Demand Market Participation
http://www.systemoperator.co.nz/f3210,36010947/Appendix_A_-_A_Collation_of_International_Policies_for_Under_Frequency_Load_Shedding.pdf – Load Shedding in International operators


Humans have always wanted light at home, from thousands of years of fire based light through to 18th century gas lighting, through to the early 19th century and 20th century with electric lighting.

In the 21st century, lighting is something we’ve taken for granted.
You come into a room, flick a switch, and you have light.

While the technologies have changed over the years – from incandescents and neon through to modern LED-based lighting – the user interface has remained the same.
There have been remarkably few changes to that interface over the years: press a switch, and let there be light.

While there have been a few specific use-case divergences – e.g. motion- or sound-based activation (for security lighting, or public lighting in buildings), or marginal modifications to output (dimmers) – those haven’t really changed the way we work, as the original interface is just so simple and succinct.

It has been something we’ve taken for granted, but what if you want more?
Timbuk3 claimed that the future is always brighter, so where are our sophisticated lighting solutions?

The answer is Smart Lighting.

The smart lighting space has been an interesting one to look at.
In some ways, it’s a solution looking for a problem – it’s cool, but it’s not something that most people really need. Currently the market appears to consist of sophisticated consumers – e.g. people looking for automated solutions for their upmarket cinema / projector rooms or similar – through to geeks who want to play with fun new tech.
Your average home user isn’t likely to want to try it, as it’s still nowhere near as easy as the incumbent solution.

That said, it’s getting to the point where it’s worth taking a look at, so one can dip one’s toes in, so to speak.

Currently there are 2 mass market implementations that are out there that provide additional smart features over and above the typical light on / light off provided by a switch.

Both systems do pretty much the same thing – they provide features over and above the normal set of functionality.

First the downside – they’re parasites. In order to be controllable remotely, they need to be permanently drawing power. It’s minimal, but it’s still a current draw – green, these are not.

They also complicate lighting slightly – if you’re used to turning the lights off and on via a switch, you still can, but if they’re currently set to off via the app, you’ll need to flick the switch off and on again to turn them on.

To sum up – you can still turn the lights on or off via the wall switch, but if you leave the switch on, then you can also control the lights via your computer or smartphone.

The apps for both solutions have fairly similar functionality. You can change the color of the light output, from white light through to yellow, red, and blue lighting. You can also dim the lighting.

Both of these solutions are similar –

Philips Hue, and YeeLink.

Philips has been attempting to sell their smart lighting system for a few years now – it hasn’t really set the world on fire, but it has been a slow if unsteady seller.

The Philips solution is based on a small ST Micro 32-bit processor running the base station, listening over ethernet, which then communicates with the bulbs via ZigBee wireless (TI CC2530 chipset).

Bulb plus base station looks like this:

Philips Hue System

Inside, the bulb has a number of LEDs that control the light output and light colouring.

Philips LED Internals

The Philips solution is OK, but it’s not as open as it could be. They do have apps with functionality, but the main complaint is that features haven’t been added, and the apps are buggy.

Their solution is here –
http://www.meethue.com/en-US

Onto the Chinese solution!

Yeelink has a fairly similar product that came out around the same time as the Philips one.
Theirs is Arduino based, and a lot more open.

They have a GitHub site with code for the base station (Arduino-based), which communicates over ZigBee (noticing a trend here!).

Take a look below at a demo of functionality

What makes the Yeelink different is that they have a better UI and a better API.
They’re also not staying with just lighting – they’re adding temperature and other sensors that will tie into the base station in future upgrades.

Future is future, and now is now. Let’s take a look at what comes in the box – firstly, it’s impressive packaging for a Chinese company; it looks good, and is well packaged. Kudos for good design.

Box packaging –

211802jc00rtlge0lw0ygu.jpg.thumb

211803k6yijaxey44i5i1j.jpg.thumb

Base system with 3 bulbs. This retails at 600RMB/ $99

211804nhdjhnhh0h0ho77j.jpg.thumb

Base station

211806j3z89r6ymrgylp6e.jpg.thumb

iPhone app (comes in English and Chinese)

180416gxq5dqogdsdcx4gg.jpg.thumb

All in all, I think the Chinese one is better than the Philips one, purely for the openness – it’s fairly easy to integrate into other things.
It’s still a toy rather than a must-have, but it’s a fun toy.

Would I buy one – yes.

Site:
http://www.yeelight.com

Taobao:

http://yeelink.taobao.com/search.htm?spm=a1z10.1.w5002-232341356.1.qYAuEW&search=y

Update –

Received mine, and I’m semi-happy. It’s still expensive, but it does work.

Downsides –
There is a slight latency on the color changes – maybe 300ms or so – but otherwise I can’t complain.
It doesn’t turn on when you turn the light switch off / on (which I would have expected). You really have to turn on the light with the app, or not at all.
Price!

Upsides –
Works.
Fun.

Will be happy when the next set of items comes out that can interface with it.

Still have to play a little with the API side – I might link it to ZoneMinder for initial testing, similar to the eBuddy we use. Everyone had a go with the app and enjoyed the iPhone side. The price needs to come down to RMB50 a bulb + RMB100 for the controller before it feels like I’d want to get more of them. As an early adopter though, I’m fairly happy.

Some actual photos below:

Screen Shot 2013-05-21 at 6.31.47 PM

Screen Shot 2013-05-21 at 6.31.22 PM

Screen Shot 2013-05-21 at 6.31.10 PM

Screen Shot 2013-05-21 at 6.30.57 PM

Screen Shot 2013-05-21 at 6.30.48 PM

Screen Shot 2013-05-21 at 6.30.38 PM

Screen Shot 2013-05-21 at 6.30.25 PM

Screen Shot 2013-05-21 at 6.30.14 PM

Screen Shot 2013-05-21 at 6.29.57 PM
