Running behind the schedule since 1989

Sphinx Theming


Getting Started

Customizing Sphinx visuals was somewhat frustrating to me at first, since the process is not really straightforward and the documentation is scarce. It could be a little more user-friendly. However, as soon as you get a grip on the basics, it gets pretty smooth and simple. You may have already tried copying over the default theme only to discover that there is nothing particularly useful in there for an inquisitive scholar: only one of the standard Sphinx themes is in fact editable and complete, while the rest simply inherit its properties and add some minor alterations. This theme is called Basic, and it's a minimal sandbox template, the only theme that can really help you get to the very bottom of Sphinx customization. Later you'll be able to inherit from it and create a theme consisting only of your alterations, but for a start it's fine to copy Basic in its entirety.
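Inheriting from Basic later on is configured in the theme's theme.conf file. A minimal child theme that only swaps the stylesheet could look like this (a sketch; mytheme.css is a hypothetical file you would place in the theme's static folder):

```
[theme]
inherit = basic
stylesheet = mytheme.css
pygments_style = none
```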

Hopefully, you've already created a folder for your Sphinx project and initialized it by issuing:

$ sphinx-quickstart

Or you may already have an existing Sphinx project you want to theme; it's up to you. In your project folder create a _themes directory, then copy the theme folder there and rename it. The Basic theme should be located in the site-packages folder of your active Python install. On my Mac it's /Library/Python/2.7/site-packages/Sphinx-1.2b1-py2.7.egg/sphinx/themes/basic/, but Python on Mac OS X is just weird. If you're using Linux or Windows, you should look it up yourself.
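Rather than hunting for the folder by hand, you can ask Python where Sphinx lives. Here is a small helper sketch; the hard-coded path in the example call is only a stand-in for whatever os.path.dirname(sphinx.__file__) returns on your machine:

```python
import os

def bundled_theme_dir(sphinx_package_dir, theme='basic'):
    """Build the path of a theme shipped inside an installed Sphinx."""
    return os.path.join(sphinx_package_dir, 'themes', theme)

# In a real session you would pass the actual package location:
#   import sphinx
#   print(bundled_theme_dir(os.path.dirname(sphinx.__file__)))
print(bundled_theme_dir('/usr/lib/python2.7/site-packages/sphinx'))
```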

The next step is to change the conf.py of your project accordingly. First, we should make sure that the following line is uncommented and in fact correct:

html_theme_path = ['_themes']

Don’t forget to check the value of the html_theme parameter above:

html_theme = 'renamed_basic'
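Put together, the theme-related part of conf.py is just these two settings (a minimal sketch; renamed_basic stands for whatever you called your copied theme folder):

```python
# conf.py (fragment): point Sphinx at the copied theme
html_theme = 'renamed_basic'      # folder name inside _themes
html_theme_path = ['_themes']     # relative to the project root
```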
Basic Theme

Basic theme without customization looks like vanilla HTML with no stylesheet whatsoever.

As of now you may start making alterations to the theme. You will find that the HTML files in there aren't really HTML files, but templates with some stuff automatically inserted on build. You can combine these automatic tags with basic HTML tags, and as soon as you figure out how it works, you can move some of the interactive tags around or get rid of some of them altogether. Don't forget to check whether you're breaking anything, though.

Most visual aspects of a Sphinx theme are modified through CSS files, which are located in the static folder. The main CSS file for the Basic theme is basic.css_t. Notice that the t in the extension indicates that this is a template; other than that, it can be viewed and edited as a simple CSS file. For a better understanding of the HTML classes produced by standard Sphinx directives, you can inspect your site with the developer mode of any major browser.

If you play with the HTML and CSS templates for some time, you may want to add something more interactive. These kinds of things are easily implemented through JavaScript or jQuery (which is enabled by default in Sphinx). Using them, you can create spoilers, drawers, comments (Disqus?) or other interactive elements for your site. You may also notice more and more stuff lying around the theme folder as you experiment with Sphinx customization. Most of it is not even enabled by default, but it can be incorporated into your theme. I myself haven't used even 10% of what can be achieved with Sphinx themes.

Example I: Spoiler

Let's go through a little example project to see what can be achieved with the combined power of Sphinx and JavaScript. You may notice that while common elements in HTML themes can usually be addressed from JavaScript, it's not that easy to introduce new classes and wrap some of the text in them. There is no custom div tag in Sphinx, unless you use the .. raw:: directive, which is not really a native way and breaks lots of things. What I needed to do was wrap a section of text in a div and make it collapsible at the press of a button. You can achieve that by creating your own admonition and assigning it to a certain HTML class. For example:

.. admonition:: Request
   :class: splr

   Request example and parameters.

You can then easily reference this admonition by its class in your JavaScript. For example, you could put this jQuery script into your page.html template:

    <script type="text/javascript">
    $('.splr').css('background-color', '#F0F0F0');
    $('.splr').css('box-shadow', '0px 0px 1px 1px #000000');
    $('.splr > .last').hide();
    $('.splr > .expanded + .last').show('normal');
    $('.splr > p').click(function() {
        // toggle the admonition body when its title is clicked
        $(this).siblings('.last').toggle('normal');
    });
    </script>

If implemented correctly, this script turns the splr admonition into a collapsible drawer. I've been actively using this when documenting HTTP APIs, since it's very helpful to hide the request and response by default. Note that you could also use this method for special CSS effects. Imagine if, aside from the usual note, warning and tip, you could have yellow, blue and purple boxes for whatever purpose you can think of. Well, it couldn't be any simpler. Admonitions are good default containers for parts of your text that should differ in design or function from the rest of the page.

Example II: Interactive TOC in Sidebar

Not the worst case, but still a valid comparison of the TOC trees in the Default theme and our JavaScript-enhanced Basic. Note that on the right the subcategories can be collapsed and expanded dynamically.


You may have noticed that Sphinx often adds a TOC to the sidebar automatically if it's not explicitly placed in the page itself. While this is certainly a very useful feature, sometimes things get out of control. I didn't use the worst case in the picture on the right, but a page can get up to innumerable 1st-level sections, each of which can have a number of subsections, and so on. Sometimes that is a clear sign the page should be reorganized into multiple standalone pages united by a category, but it's not always possible or needed. It's perfectly alright to have a long and deep TOC in some cases, and the Default Sphinx theme is terrible in that regard.

You can use an updated version of the script from the previous example to collapse some parts of the TOC by default. Note how this script works with the ul and li tags of the TOC tree list. Some rules are applied to the highest level of the list, some to the subsequent levels. This is best seen in the different styles applied to different levels of the list, so that you can tell whether a title is 1st, 2nd or even 3rd level. Here is the full script:

    <script type="text/javascript">
    $('.sphinxsidebar').attr('position', 'fixed');
    $('.sphinxsidebar').css('position', 'fixed');
    $("h3:contains('Table Of Contents')").css('border-bottom', '1px');
    $("h3:contains('Table Of Contents')").text('TOC:');

    $('.sphinxsidebarwrapper li').css('background-color', '#B8B8B8');
    $('.sphinxsidebarwrapper li').css('box-shadow', '0px 0px 1px 1px #000000');
    $('.sphinxsidebarwrapper li').css('color', 'white');
    $('.sphinxsidebarwrapper li').css('border-color', '#000000');

    $('.sphinxsidebarwrapper li > ul > li').css('background', '#D0D0D0');
    $('.sphinxsidebarwrapper li > ul > li').css('box-shadow', '0px 0px 1px 1px #000000');
    $('.sphinxsidebarwrapper li > ul > li > ul > li').css('background', '#F0F0F0');

    $('.sphinxsidebarwrapper li > ul').hide();
    $('.sphinxsidebarwrapper li > .expanded + ul').show('normal');
    $('.sphinxsidebarwrapper li > a').click(function() {
        // hide the submenus of all sibling entries
        $(this).parent().siblings().children('ul').hide('normal');
        // toggle the submenu of the clicked entry
        $(this).next('ul').toggle('normal');
    });
    </script>

Perhaps it won't look very polished, but it is a very simplified version to illustrate the concept. Once you get the idea, you can alter or add .css calls to achieve more plausible visuals. You could also work on lower title levels, but you'll have to figure that out yourself. In this version the script works only with three heading levels, from <h1> to <h3>.

That's it. I've compiled these examples into a full-featured theme project on GitHub. I'm going to polish it to some extent and, perhaps, implement more interactive stuff over time. Feel free to contribute to this little project. If you come upon any issue with the examples above, or with the theoretical rambling at the beginning, let me know. Sure, I've tested both scripts, but you never know. Also, you're free to use any of these examples as a building block in your own work with no attribution, since they're rather generic and simple.

Written by wswld

September 2, 2013 at 7:11 pm

On dType Suspension


I can safely confess that a couple of years ago I didn't know a single thing about programming. I was aware of some fairly abstract concepts and had a basic understanding of how it all works, but it definitely wasn't enough. My English teacher had a saying about active vocabulary: "You may learn all the words in the dictionary by heart, but unless you use them regularly, you don't really know them." My situation with programming was somewhat similar: lots of trivia, but no grasp of the practical side. I was determined to fix it as soon as possible. I tried reading a book or two, but it never really got me going. Well, it explained a couple of things here and there, but it was like learning things by heart: tedious and irrelevant. At that point one of my techie friends suggested I throw the book away and learn by immersion: set an objective, stumble upon problems, read the docs and StackOverflow for possible solutions. That was the moment I started looking for my first project, fairly simple, yet more challenging than a mindless Hello World routine.

Once, I was typing a big chunk of plain text on my old slow Android phone in another office suite, with all those controls and sets of buttons on all sides of the screen, and I wished there was something like Focus Writer on Ubuntu: basic, but fairly powerful in terms of achieving that special zen state. There weren't many such projects in the Android Market back then (yeah, kids, it was called that in days of old), and this is how the idea of dType struck me. The concept was fairly simple: a minimalistic tool that would let you jot down some text and then pass it to some other application (Evernote, Dropbox, Email, etc.) for saving or processing. It was simple enough for getting a grip on the basics, yet quite challenging for a person who had never seen Java code (or any code) before.

That was the moment I started coding. Well, let's say it was more about googling intensively for just about anything. It was hard. Most of the time I didn't know what was happening and asked fairly inept questions on StackOverflow. I still do, but now at least I can tell what most parts of my code are doing, or are supposed to do, at any given moment. At first, immersion is like trying to play piano blindfolded: my code probably stank big time, but at the end of the day it worked, and that was encouraging. My interest in Android development helped me get a job as a technical writer on a bunch of Android-related projects, notably OpenCV for Android. Since I was working mainly on C++ API references, I started to delve into OOP concepts. I had classes, methods and how they relate to each other, interfaces, abstract classes and the rest of this stuff thoroughly explained to me. I'm extremely grateful to my mentor there. Later, working on some other project, I had a chance to look closer at working Java code and see these concepts applied to Java. I immediately started to refactor the dType code once again in an attempt to implement a thorough OOP design and shake off all the redundant stuff. My code became a little more laconic and neat. Not that it couldn't get any better, but it was still a huge step forward for me.

As far as I remember, dType was constantly improving. It started as a bunch of undocumented spaghetti code, which was somewhat straightened out by version 0.16; that became the earliest version I bother to keep in the repository history, since everything before it was a complete disaster. Perhaps it's still rather bad, but I've managed to almost halve it, provide descriptive Javadoc and fix a lot of issues while at it. I do feel a little emotionally attached to this code, since it is my first coding experience that has grown into a little indie project. Over the course of two years it has provided me with innumerable challenges and priceless practical experience, but it's finally time for me to move on. I've taken a great interest in Python lately and started a couple of projects in it. Coming back to Java code became more and more daunting. I was also advised by several programmers that I'd better concentrate on getting really proficient with one language for now. My growing frustration with Java's verbosity ensured that I would end up with Python as my language of choice.

Still, it was a hard decision to drop dType completely. People do use it and clone it on GitHub. This project, though certainly quite niche and facile, does work for some. I decided that this suspension is going to be more of a role shift for me: from active developer of this application to its maintainer. It will stay as an open repository on GitHub for you to clone and alter, and it will stay published on Google Play. You can continue to use it in version 0.71. If people provide relevant pull requests, I would be happy to merge them and even publish the resulting build as a new version of the app. It's just that I myself no longer have the time or inclination to introduce new features. It is now exactly the way I envisioned it when I was starting. My big learning project has reached its objective. It's finished. My priorities have changed, but if you do care, I would be glad to see your contributions. I'm not naive enough to think that it could become a huge open source project, but I do hope the app can continue living on its own while I'm gone.

Written by wswld

August 22, 2013 at 11:56 am

Why Static Site Generators aren’t Good for My Blog


I will be speaking about my own experiences, and you may have noticed the word my in the title, which is there to remind you of the subjective nature of this article. Egocentric to the extreme. I don't dare speak about your blog or any other blog out there but my own. I have no intention of converting you to my side, yet I would be very glad to find some like-minded people out there. I know they exist. My own research on the topic has revealed that although static site generators have a really substantial and zealous following, there are sober voices in the crowd appealing to common sense. This is exactly why Kevin Dangoor went "From WordPress to Octopress and Back", and why Michael Rooney is "Migrating from Octopress to WordPress", in the exact opposite direction from the majority of switchers.

Masses Migrating from WordPress to Octopress

Illustrating the distinctive trend among the majority of switchers.

However, it doesn't boil down to WordPress vs Octopress, as the issue at hand is much broader and may be represented as dynamic site engines vs static site generators. If you're not aware of the difference between the two, here are the basics: with a dynamic site your content is generated dynamically by an application running on the server side; static sites, on the contrary, are pre-generated or written outright in HTML. Basically, dynamic sites are web applications that can change their behaviour instantly, depending on input and other factors, while static sites are just HTML files passively sitting there, waiting to be opened and read. Sure, with the introduction of jQuery, JavaScript and HTML5 into the mix, the difference gets a little less distinct, but let's stop at this level.

So, static vs dynamic. It could be virtually anything: Tumblr vs Hyde, Blogger vs Pelican, Movable Type vs Jekyll, etc. Major differences between the two models are more or less the same. It means that we should be really comparing the models themselves, not their instances.

So, what are the alluring advantages of the static generation model? What makes people switch so quickly and without looking back, as they say? I came up with the three most important reasons:

  1. Almost endless customizability.
  2. It’s mostly plain markup text files, that you can use with Git.
  3. Increased loading speed and security.

All three are valid points, and at some moment I went down the static path myself with all three in mind. I went with Jekyll, then switched to Octopress. At some point I even tried to make a Sphinx-generated site (sic!) work as a personal web page, but the lack of blog awareness and the increasing complexity made me abandon the idea. When they say that running this kind of site is the easiest thing to do, it is complete nonsense. In terms of a comfortable workflow, I can only say a couple of good things about pure Jekyll paired with GitHub Pages, but the result was so raw and required so much customization that the straightforward workflow was hardly an advantage. It is positioned as a toy for true geeks and tinkerers, but I don't see how anyone could really benefit from this kind of tinkering. I work in IT, and we are here mainly to solve problems, not create tons of complementary issues. Instead of reinventing the wheel, you could as well invest your time into something that really needs to be done, perhaps writing.

I'm not the only one who noticed it, but in the several months I'd been experimenting with static generators, I'd hardly written half a post. As I googled, it turned out to be a common problem. People dig so deep into the endless customization and switching that eventually they stop writing. I may sound conservative to some, but I still think that blogging is mainly about content. Sure, to some extent this problem applies to dynamic platforms too (ping-pong between WordPress and Blogger, anyone?), but with static generators it grows to catastrophic proportions. They are the ultimate time drain for nerds and wannabes. Sure, if your time is worth basically nothing and constantly tweaking your blog is the best part of having it, be my guest. For me this kind of time misuse is inexcusable.

Another seeming advantage of static site generators is their reliance on simple text, which can be managed in distributed version control systems like Git. Most switchers assume that since they already use plain text and Git for code, why not use them for a blog? The problem is that this also complicates things instead of easing them. Each generator has its own super-easy workflow with different special folders, commands and scripts, and it quickly becomes a mess. In this regard Git is basically one more noose around your neck. I'd focus on one especially nasty implementation of Git in such a workflow: Octopress. You should fork the original Octopress repo, the _deploy folder is used for deployment to the pages repository, and sources should be committed to the special source branch. Now imagine what happens if a somewhat major update gets pushed to the Octopress upstream and your copy has been significantly modified over time. As someone has put it: "Octopress is great, until it breaks". If you have any serious experience with Git and have seen the Octopress workflow, you can imagine the hell it could possibly be. Actually, Octopress is itself a heavily modified version of Jekyll. No offence, but it all seems like a Rube Goldberg machine to me.

If you google it, you will find lots of people performing full-featured benchmark tests of WordPress vs Octopress with complete disregard for the principal differences between the two. People start speaking of the security and speed benefits of static sites, forgetting about all the advantages of dynamic sites that come at the price of increased complexity and bloat (yes, I do think WordPress is a tad bloated). Imagine you need to jot down a post draft on a public or someone else's PC. Will you be cloning the repo, installing Ruby and Octopress, setting up the whole environment to write a short post? What about mobile support? Should you attempt to clone Octopress to your mobile phone? What about preserving drafts in the cloud without publishing them, while having access to them virtually anywhere and anytime? Can you really put a price on that? People start using Evernote or a similar service for drafts and such, but is it really worth introducing another tool into your workflow? Aren't mobility and availability worth another couple of seconds of load time? My own choice is comfort and efficiency. I want my blog to be complementary to my technical endeavors, not the other way around.

I started thinking that less is actually more a long time ago, and it applies to blogging as to almost any other area of our life. You may notice that I don't even use a standalone WordPress install, but the pretty limited hosted WordPress site. I prefer to pay the engineers at WordPress $13 for domain mapping and settle for less choice in themes, plugins and other options to focus on writing. We're all too lost in the world of different platform and workflow options these days. Google it and you will see hundreds of rants about why platform A is decidedly better than platform B, why static sites are better than dynamic. You almost never hear that they help you in writing, no. It's all about SEO, storage space, customization, load speed and other insignificant stuff not directly related to blogging. We're too obsessed with form and seem to forget about crafting content. But, as I said before, this is all entirely subjective. You may still go down the static route and customize the ass out of your blog. You may even spend several years writing your own static or dynamic blog engine from scratch, which will surely be absolutely unique and different from anything ever done before. Yet I'm writing this post in a beautiful distraction-free WYSIWYG editor, my draft will be preserved online when I press the Save button, and no rake deploy is needed ever again.

Written by wswld

August 6, 2013 at 5:24 pm

BioAid Meets Life


A couple of weeks ago my wife and I went to a hearing aid center for a free consultation before, perhaps, buying a new hearing aid. The previous one had been working just fine for five years or more, but lately it had become unusable. In the center they ran all the required tests, created an audiogram and offered a couple of models to choose from. I was satisfied with the devices; however, I didn't notice any major difference from the one I already had. I was hoping to manage with $500-$600, and I was shocked when they named their price: even the cheapest of the devices was around $1000. We could afford it, but I declined.

Being a person with a congenital defect of the auditory nerve, I have worn a BTE hearing aid all my life since early childhood, and I still remember the day I put the thing on for the first time. For a boy who had never heard the sound of footsteps on asphalt or birds twittering, it was a marvelous discovery to hear all those sounds for the first time. Later in life, when I was wearing my fifth or sixth hearing aid, this wonderful piece of technology was already taken for granted, and I was actively using it in school, university and later in the workplace. Gradually, I came to recognize quite a few shortcomings of modern hearing aids:

  1. Most doctors suggest you wear aids on both ears, since that really helps you locate sounds and experience stereo or 3D hearing. Wearing two devices may seem tempting if you don't do anything else with your ears, like using a phone or headphones, or participating in all kinds of intense activities (sport?) where people may unwittingly flick an aid off. It's the physical inconvenience of having something plugged into your ear that is not that simple to take off, but paradoxically simple to drop. People using them day-to-day will understand what I'm talking about. Headphones and phones also require you to take the aid off first, and don't get me started on the horrible phone mode available in most modern aids. This is actually how I lost one of my aids: I needed to use a phone, took the aid off and missed my pocket. Never saw that device again.
  2. Whistling. Yes, they whistle constantly, and it is a curse upon people with limited hearing. They whistle even more as the plastic ear mold wears out, which means that ideally it should be replaced every year. The whistling is feedback, produced because the mic and the speaker in the ear mold are too close to each other. Ideally the ear mold should fit the ear hermetically to avoid feedback, but at times it sticks out anyway, and when it does, it's unbearable.
  3. Closed-source software and hardware. This industry is controlled by a bunch of electronics giants (Siemens, Phonak, etc.), and they have become to some extent monopolistic in this market, since only they had the initial resources to support the research and production of hearing aids. Of course they lay down all the rules now, which leads us to the fourth and worst limitation of all.
  4. Price. These devices are pricey as hell. It's a mic, a little processor and a speaker. Yeah, the size is super-small, but it doesn't add up to $1500-$2000 in my head, sorry. It's just immensely overpriced. I'm not a cheap guy, but I do have a problem when people feed me "magic", and as I work in technology, I know that such claims are almost 100% marketing and outright bullshit. They know that most of us, the hearing impaired, don't have a choice, and we're forced to pay twice as much money for anything they come up with. If you look at the last ten years, they have been going round in circles. Hearing aids haven't seen any revolutionary improvement for decades, compared with the booming consumer electronics market, and it's just sad.

Considering all of the above, there is great demand for an open solution in this market. Something you could tinker with yourself and use as a temporary or permanent substitute for a commercial hearing aid. To achieve that, it should be capable not only of recording, amplifying and reproducing sound, but it should also be smart enough to amplify only certain frequency ranges, depending on the type and severity of the hearing damage. It would need some computational power to process the sound. Ideally, it should also analyze the sound and get rid of background noise, while normalizing the rest (making it quieter or louder depending on the context). Modern smartphones are perfect candidates, since they have everything we need in a hearing aid. I started looking for solutions available as an iPhone app and stumbled upon BioAid.
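That frequency-selective amplification is the essence of any compensation scheme. As a toy illustration of the idea (not BioAid's actual algorithm, which models the auditory periphery far more faithfully), here is a naive DFT sketch that boosts only the bins above a cutoff, the way an aid compensates a high-frequency loss:

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform (O(n^2), fine for illustration)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    """Inverse DFT, returning the real part of each sample."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def boost_high_frequencies(signal, cutoff_bin, gain):
    """Amplify every frequency at or above cutoff_bin, leave the rest alone."""
    spectrum = dft(signal)
    n = len(spectrum)
    for k in range(n):
        # bins above n // 2 mirror the negative frequencies
        if min(k, n - k) >= cutoff_bin:
            spectrum[k] *= gain
    return idft(spectrum)

# A high-frequency tone gets doubled, a low-frequency tone passes unchanged:
n = 8
high_tone = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
low_tone = [math.cos(2 * math.pi * 1 * t / n) for t in range(n)]
boosted = boost_high_frequencies(high_tone, cutoff_bin=2, gain=2.0)
passed = boost_high_frequencies(low_tone, cutoff_bin=2, gain=2.0)
```

A real hearing aid applies this per short window of the live signal, with a gain curve per band taken from the audiogram rather than a single cutoff.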

BioAid is an app implementing a full-featured hearing compensation algorithm developed by a team of scientists at the University of Essex. They themselves stress that this is not really about an iOS app, but about the algorithm at the heart of it, which took years of research and continues to evolve today.

Initially, the research was not concerned with hearing aids at all but with the construction of computer models of how hearing works at a physiological level in the auditory periphery.

However, the team has moved on to hardware models and opted for mobile phones, since commercial hearing aids are almost impossible or too expensive to modify and require an agreement with the manufacturers, which is not that easy to obtain. Smartphones have everything a hearing aid needs (mic, processor, speaker), they're compact enough, and modern smartphones have a sufficiently long battery life to perform on-the-fly sound processing almost all day. In my case it was a godsend, and I rushed to test the app in everyday situations.

The first thing I needed to do was find the most suitable mode. For me it was simple, since I've done a million audiograms and knew that my hearing lacks some of the higher frequencies. After a quick scan I found Gradual HF, the one I recognized at once, as it reminded me of how all my aids sounded. My advice would be to start your scan with the first variants of every mode, since some of the modes may be too loud or too high in frequency, and it's unpleasant to learn that the hard way. Surprisingly, finding the right mode is not a problem at all. I was afraid the app would require audiograms and that would complicate things; it's definitely easier this way. Depending on the headphones (they have different levels and may alter the sound a little) I was best off with the 2nd and 3rd variants of the Gradual HF mode.

I started testing the app in a park with lots of people walking, rolling and skating around. Although it was quite a test to start with, I was pretty impressed with the results. It reminded me of the time I put my first aid on. I heard everything happening around me quite distinctly. Frequencies were altered in the right way. Sure, the iPhone headphones mic has its problems, and I'm still hoping to find a better one, but other than that I had no problems at all. It does reduce a little background noise, depending on the Gate value; however, I wouldn't recommend setting it much higher than the default, as it may cripple other, more critical sounds. The problem with the standard Apple headphones mic boils down to occasionally missing sounds from behind or from the left (if you wear the mic on your right side), but it's not critical. However, if you're speaking with someone and the person is on your left, it may work a little less precisely than usual. The mic is also quite sensitive to wind and clothing rustle. Due to lag you can't use Bluetooth headphones, though. This is an iPhone issue, since people watching videos with Bluetooth headphones sometimes notice it too.

Usually I wear my hearing aid at the office, since it is the only place where most conversations are critical and may happen almost spontaneously. Here, too, I was quite satisfied. I heard everything said in meetings, even better than with my previous aid, and decided to use BioAid at least temporarily at work. The only problem I can imagine is people's perception of wearing headphones all the time; some assume you're listening to music. My office is quite liberal and modern, since it's an IT company, and a little Skype chat announcement worked. I can imagine, however, that it wouldn't always work this well. Personally, I find headphones more aesthetically tolerable than a BTE aid, since people are wearing headphones almost constantly nowadays. Another problem is that you may need to buy a battery extender case or enhance the battery life of your iPhone in some other way. My battery is just enough to live through a usual working day, avoiding other uses of the phone if possible. I only listen to occasional music on my commute in the morning and after work. If I take the phone off the charger at 8 in the morning, it is usually almost dead by 22:00 after a full 8-hour day. Battery life is my biggest concern with the app so far. I have thought of getting a separate iPod Touch and running the app there, as professor Ray Meddis does in this video.

Another minor flaw is that the algorithm is implemented in mono, though in theory a stereo implementation is also possible. This matters because it may affect your perception of the direction a sound is coming from. Even if the sound were processed in stereo, the standard iPhone headphones mic is mono, so the input is mono by default. Perhaps stereo would be an even worse battery drain, so maybe it's OK the way it is. This is specific to the iPhone implementation, not the algorithm itself. Speaking of the iPhone implementation, there are also some minor issues that complicate the workflow, like the app stopping on a phone call and not resuming afterwards, or the welcome screen appearing every time the app is launched, but all of these are solvable.

Still, for now I'm not even thinking of going back to commercial aids. I have a very strong impression that the BioAid approach is the future of hearing aids, especially for people whose hearing damage is not so severe that they require deep in-canal aids or even implants, which is the majority of people with hearing problems. Offloading the sound processing from the aid to a smartphone or a similar device (iPod Touch?) may be the right way, especially considering that going from nine to twelve channels adds at least a thousand dollars to the price of a commercial aid, while the iPhone has enough computational power to process much more. Sure, there are still some problems, but most of them are implementation issues and are going to be fixed sooner or later. The algorithm itself is entirely open-source, which means you can fork it on GitHub and create your own version, addressing the issues described above or providing support for some other platform. If you're a hearing-impaired person and you've decided to try BioAid for yourself, don't forget to provide your feedback to the research group, since it may turn out to be very useful to them.

Written by wswld

July 20, 2013 at 1:10 pm

Google Apps and Household Finances

leave a comment »

I'm in charge of household finances. By this I mean recording every single money transaction occurring within our family and providing, on demand, information about balance by account, overall household value, etc. I also issue monthly financial reports, so that my wife and I can analyze them and come up with a better spending strategy. When the whole thing works, it provides some outstanding insights into the economic condition of our household. When it works.

Let's assume you want to try it for yourself. A casual Google search will expose you to a vast number of options, some of them paid, some free as in beer. But I'd wager you wouldn't be completely satisfied with any of those apps and services:

  1. Some of the superb options out there are probably unavailable in your country if you live outside of the US. For example, the famous Mint app is unavailable in my country without a good VPN.
  2. Some of the projects don't provide any mobile app, or provide a 3rd-party solution, which will have all sorts of quirks and unsupported features.
  3. Only a small subset of projects gets it right. The experience is always limiting to some extent, even with paid applications. I've never seen a personal finance app that provides all the features I need, and I'm not among the most demanding users out there, believe me.
  4. You usually have no access to your data except through export. If your data is somehow lost or ruined, it all happens under the hood. You won't be able to fix the thing yourself, unless you're a developer and the app is open source.
  5. Platform support may also be an issue. Some developers provide only iOS and Mac bindings, while others try to please everyone except Apple users. I'm using a Mac and an iPhone, and my wife has been a die-hard Android zealot (just kidding). Since she couldn't access our database herself, she notified me about every transaction and I entered it into the system on her behalf. Can you imagine how exhausted I was after several months of this workflow? It was all because the app developer didn't provide any sane way of syncing between Android and iOS. Moreover, when my wife switched to the iPhone, we discovered that they don't even provide syncing between iOS devices.

I'm speaking about Money by Jumsoft. Not only did it never really implement syncing between different mobile devices, but it also failed to provide simple Mac-to-iPhone sync, a feature that is actually listed on their site. It used to sync through iCloud, but when all the drama with iCloud not supporting databases occurred, they went back to Wi-Fi syncing. It never worked right, and ultimately it crippled our data. Five months of carefully collected entries for every single transaction gone in a second. That was the moment I started to look for some other approach.

The new approach came as an idea of using something simple and omnipresent, something that would be available for all of the popular mobile and desktop platforms out there. I was thinking of using Google Docs. At first this idea seemed a little bit crazy, but as it developed into a working prototype, it turned out to be a very smart move. Let's break the concept down in theory:

  • We need some kind of interface for creating entries.
  • We need a database for storing entries.
  • We need some kind of a script to process the results.

I looked closely at all the products provided by Google, and found all three components for implementing this concept:

  • The interface is going to be implemented as a Google Forms document.
  • The database is best implemented as a Google Spreadsheet. Jumping ahead of myself, I can also reveal that forms can write their responses to spreadsheets. That's exactly what we need!
  • The script was a little bit harder to figure out. First, I was hoping to process everything with built-in spreadsheet functions, but that never really worked for this kind of calculation. So I went with Google Apps Script. More on that in a bit.

It all looks quite simple in theory, but in reality there are quite a few pitfalls here and there. I'm going to guide you through all the major steps of implementing this concept in practice.

Creating the form and collecting responses is not actually that hard: in Google Drive, create the form document and populate it with elements. However, there are a couple of minor considerations which may affect your output data to some extent:

  1. Watch the question titles, as they directly determine the output columns and their captions. Section captions have no effect on the output data, though.
  2. Questions do not override each other. So, if you have a question called Amount in one section and a question with the same name in another, you will end up with two Amount columns in your table, holding different values.
  3. You may also want to avoid nested sections, as they complicate things a little.
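To illustrate the second point, here is a minimal sketch (the header names are hypothetical) of how duplicate question titles surface in the response data, and why they are awkward to work with:

```javascript
// Hypothetical header row produced by a form that has two questions
// titled "Amount" in different sections:
var headers = ['Timestamp', 'Type', 'Amount', 'Subtype', 'Amount'];

// Looking a column up by name only ever finds the first "Amount",
// so the second one is effectively unreachable by name:
var first = headers.indexOf('Amount');    // 2
var last = headers.lastIndexOf('Amount'); // 4
```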

Here is an example of how you should not organize your form:

Section: Start
Element: Type [Bills, Food]

* Depending on type value go to one of the following:

- Section: Bills
Element: Subtype [Electricity, Cellular, Rent]
Amount: [Number]

* Commit results.

- Section: Food
Element: Subtype [Grocery, Fast Food]
Amount: [Number]

* Commit results.

Above you can observe an attempt to override the Subtype and Amount elements. It will result in a table with duplicate Subtype and Amount columns. It's not that smart, but it is the way it is. What I did was get rid of subtypes and create only the types I would certainly need. For example, I have no Bills type, but Electricity, Cellular, etc. So, I ended up with only one section like this:

Section: Start
Element: Type [Electricity, Cellular, Rent, Grocery, Fast Food]
Amount: [Number]
* Commit results.

I was geared towards keeping the system as simple as possible, so I tried to exclude all the nice but potentially useless stuff, leaving only the core functionality that would be harder to break. In the menu go to Responses → Choose response destination. It will show a dialog window allowing you to choose a spreadsheet for your output data. Quite easy. If you test the form now, you can see the responses being added to the table in your Google Drive. It even creates a human-readable timestamp column, which saves you the trouble of inputting the date and time yourself. Note that the live form may be accessed as a bookmark, or opened in the mobile Google Drive apps for Android and iOS.
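With the destination set, each submission lands as a row in the spreadsheet. Here is a rough sketch of what the collected data looks like once read back as a 2-dimensional array (the values are made up, and the exact column set depends on your form):

```javascript
// Hypothetical response data in the shape getValues() returns it:
// a 2-dimensional array, one inner array per spreadsheet row.
var values = [
  ['Timestamp', 'Type', 'Amount'],           // header row
  ['06/01/2013 12:01', 'Grocery', '25.40'],
  ['06/02/2013 18:30', 'Cellular', '10.00']
];

// Treating the amounts as strings is the safe assumption, hence
// the parseFloat calls in the script later on:
var total = 0.0;
for (var i = 1; i < values.length; i++) {    // skip the header row
  total = total + parseFloat(values[i][2]);
}
```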

Creating the form and gathering the data aren't exactly the trickiest stages of our little project. Providing somewhat valuable analytics on top of that data – that's the real challenge. Let's imagine that we've collected all the data and now we want to analyze it. We'll need some automatically updated metrics for our project:

  1. Household Balance
  2. Balance by Account

As soon as we figure out the algorithm for these two, we can easily implement any other metric (Balance by Category?) using the same method. Note that built-in spreadsheet functions probably won't cut it; we need something much more extensive and smart. Here comes Google Apps Script. It's basically a full-featured scripting API for simple JavaScript web apps: Google provides the server and the ability to bind the script to a variety of Google products. If you haven't heard of it before, there are lots of examples and learning materials on their site; believe me, there is a lot of magic going on over there.

Let's see how we can apply the scripting capabilities of Google Apps to our case. Open your destination spreadsheet and in the menu go to Tools → Script Editor; in the Google Apps Script dialog window select Spreadsheet. We will need a little script that runs when the spreadsheet is opened. The example script will leave you with onOpen and readRows functions; you can pretty much start with that. Let's look at my take on the latter:

  1. In the first part we start with SpreadsheetApp.getActiveSpreadsheet() and end up with the row values in a 2-dimensional array called values. Please keep in mind that the array is 2-dimensional.
        // Grab the sheet the form writes its responses to:
        var ss = SpreadsheetApp.getActiveSpreadsheet();
        var sheet = ss.getSheetByName("Form Responses");
        var rows = sheet.getDataRange();
        var numRows = rows.getNumRows();
        var values = rows.getValues();
  2. Now we should declare all the variables we will need. In my case that meant the output array and a separate variable for every account in the family, including the cash accounts:
        var arr = [];
        var hhld = 0.0;
        var vcsh = 0.0;
        var c4545 = 0.0;

    Note that hhld here stands for the total household value, vcsh for Victor's cash, and c4545 for a fictional credit card, named after its last four digits.

  3. Let's iterate through every row and match each entry with its account. Note the braces: without them only the first statement after each if would be conditional, and hhld would be incremented unconditionally. The loop starts at 1 to skip the header row:
          for (var i = 1; i <= numRows - 1; i++) {
              var row = values[i];
              if (row[1] == 'Victor Cash') {
                  vcsh = vcsh + parseFloat(row[3]);
                  hhld = hhld + parseFloat(row[3]);
              }
              if (row[1] == '4545') {
                  c4545 = c4545 + parseFloat(row[3]);
                  hhld = hhld + parseFloat(row[3]);
              }
          }
  4. Now we may append the values to the output array and return it:
          arr.push(hhld);
          arr.push(vcsh);
          arr.push(c4545);
          return arr;
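Pulled together, and stripped of the SpreadsheetApp calls so it can run anywhere, the aggregation logic of readRows boils down to something like this (the account names are, of course, my own):

```javascript
// A plain-JS sketch of the readRows aggregation: takes the 2-D
// values array (header row included) and returns a 1-D row of
// balances: [household total, Victor's cash, card 4545].
function aggregate(values) {
  var hhld = 0.0, vcsh = 0.0, c4545 = 0.0;
  for (var i = 1; i < values.length; i++) { // skip the header row
    var row = values[i];
    if (row[1] == 'Victor Cash') {
      vcsh += parseFloat(row[3]);
      hhld += parseFloat(row[3]);
    }
    if (row[1] == '4545') {
      c4545 += parseFloat(row[3]);
      hhld += parseFloat(row[3]);
    }
  }
  return [hhld, vcsh, c4545];
}
```

Keeping this part free of SpreadsheetApp calls also makes it trivial to test outside of Google's environment.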

Let’s get to the main onOpen() function. It is shorter, but a little more tricky:

  1. We open the active spreadsheet again, but this time use another sheet, since no one would ever consider mixing data and results a good idea. We also create two arrays.
          var ss = SpreadsheetApp.getActiveSpreadsheet();
          var targetSheet = ss.getSheetByName("Analytics");
          var array = [];
          var data = [];
  2. Now let's call the readRows function to get the array of account balances, and use data[0] to wrap it into a two-dimensional array:
          array = readRows();
          data[0] = array;
  3. Finally, we should get the sheet range and assign the data to it. Note that setValues works only with 2-dimensional arrays, which is why we created one in the first place; the width of the range must match the number of values in the row:
          var range = targetSheet.getRange('A2:M2');
          range.setValues(data);
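The 2-dimensional requirement trips people up, so here is a tiny standalone sketch of the wrapping step and the dimension check that setValues effectively performs (the balances and the three-column range are hypothetical):

```javascript
// setValues() expects one inner array per row of the range, so a
// single row of balances has to be wrapped in another array:
var array = [7.0, 10.5, -3.5]; // e.g. [hhld, vcsh, c4545]
var data = [];
data[0] = array;

// A range like 'A2:C2' spans 1 row and 3 columns, so the data
// must be 1 x 3 as well, or the assignment fails:
var rowsMatch = (data.length === 1);
var colsMatch = (data[0].length === 3);
```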

You can test your app either by opening the spreadsheet or by calling the onOpen function directly via Tools → Script Manager → Run. For now the script doesn't really work with mobile devices, so you will need to open the sheet on your computer to update the metrics. There are innumerable ways you could improve this script; please let me know if you come up with something cool.

Update 18.06.2013: I've found a way to automate the account counters and therefore enable full support for mobile devices. If you follow the workflow described in the post, you will end up with a script that runs only when the spreadsheet is opened; however, for a spreadsheet paired with a form there is another kind of trigger available: on form submit. It runs the script every time the form is submitted and, unlike onOpen, it seems to be executed on Google's side, which makes our script platform-agnostic. The trigger can be enabled by going to Resources → Current project's triggers. In the dialog window add a new trigger, then select your main function (onOpen), next From spreadsheet (yes, they have time-driven triggers too) and On form submit. You can test it now by filling out the form from your phone and then checking the account counters a second or so later.

Written by wswld

June 21, 2013 at 1:41 am

