wswld

Running behind the schedule since 1989

Humble Collection of Python Sphinx Gotchas: Part II


Gotcha 1: Release and Version

Sphinx makes a distinction between the release and the version of the application: version is meant to be the short identifier, while release is the full version string, including alpha/beta/rc tags. The idea is that it should look something like this:

version = "4.0.4"
release = "4.0.4.rc.1"

Most projects use a much simpler versioning convention, so they would probably do something like this:

version = "4.0.4"
release = "4.0.4"

I’d been doing this myself for some time, until I realized that conf.py is just a Python file (no shit!) and it is perfectly fine to do something like this:

version = "4.0.4"
release = version

Yeah, kinda obvious, I know. I missed it, however (even though I’ve been putting much more complex code into conf.py), and so have some other people. So I’ll just leave it here.
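You can take it one step further and keep the version in a single place outside conf.py entirely. A minimal sketch, assuming you keep a plain-text VERSION file next to conf.py (the file and its name are my own convention, not something Sphinx requires):

# conf.py (sketch): read the version from a VERSION file kept next to conf.py.
# Assumes Sphinx exposes __file__ while executing conf.py, which it does in the
# versions I've used.
import os

_here = os.path.abspath(os.path.dirname(__file__))
with open(os.path.join(_here, 'VERSION')) as f:
    version = f.read().strip()
release = version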

Gotcha 2: Download as PDF

Sometimes a PDF download should be provided along with the hosted HTML version. It looks good, and people get a well-formatted file to use locally while they work with your API or product. In short: it can be done easily with a bit of scripting. I’m really pissed off at people who know Python or Bash and still keep asking whether there is an automatic way of doing that. Well, it doesn’t get more automatic than this:

# build the LaTeX sources and the PDF
make latexpdf

# un-comment the PDF download link in index.rst and point it at the PDF
find 'index.rst' -print -exec sed -i.bak "s/.. &//g" {} \;
find 'index.rst' -print -exec sed -i.bak "s/TARGET/$UPTARGET/g" {} \;

# extract the version string from conf.py
VERSION="$(grep -F -m 1 'version = ' conf.py)"
VERSION="${VERSION#*\'}"
VERSION="${VERSION%\'*}"

# build the HTML output and put the PDF next to it
sphinx-build -b html . ../$TARGET/$VERSION/
cp latex/$UPTARGET.pdf ../$TARGET/$VERSION/

Note that for the example above to work, the following line should be inserted into index.rst:

.. &  `Download as PDF <TARGET.pdf>`_

Here .. starts a comment, & is there to distinguish this line from ordinary comments, and TARGET is replaced with $UPTARGET, which is the upper-case version of the project name and the default name of the .tex and .pdf files. This creates a relative link to the .pdf file, which is then copied to the exact same folder where the HTML output is located. I’m not going to explain much about the variables, as their sources may differ. At work I use a Python script built on the exact same principle (I figured a Bash example would be more universal), and it gets the values of $TARGET, $UPTARGET and $VERSION from a JSON file with a list of targets (more on that in the next example). In the example above, I’m stripping the values out of conf.py. In fact, you can use whatever input you wish, or even pass the values as arguments. What I was trying to illustrate is the concept itself.
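For reference, here is a rough Python sketch of the same pipeline. The target names are hypothetical placeholders; in my real script they come from the JSON file mentioned above.

#!/usr/bin/env python
# Rough sketch of the same pipeline in Python (hypothetical names and paths).
import re
import shutil
import subprocess

target = 'foo'      # placeholder: output folder name for the project
uptarget = 'FOO'    # placeholder: default name of the .tex/.pdf files

# pull the version string out of conf.py
with open('conf.py') as f:
    version = re.search(r"version = '([^']+)'", f.read()).group(1)

# un-comment the PDF download link in index.rst
with open('index.rst') as f:
    text = f.read()
with open('index.rst', 'w') as f:
    f.write(text.replace('.. &', '').replace('TARGET', uptarget))

# build the PDF, then the HTML, and put the PDF next to the HTML output
subprocess.check_call(['make', 'latexpdf'])
out_dir = '../%s/%s/' % (target, version)
subprocess.check_call(['sphinx-build', '-b', 'html', '.', out_dir])
shutil.copy('latex/%s.pdf' % uptarget, out_dir)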

Gotcha 3: Using Scripting to Organize a Sphinx Project as a Multi-Project Knowledge Base

Some of the companies I’ve worked at had a huge array of active projects that they wanted to present as a single site, or as a variety of sites with the same theme, or as a single site with a PDF version for every first-level subsection. Basically, they wanted me to create a Sphinx-based knowledge base. With a simple Python or Bash script you can organize your project any way you want (we’ll use Python this time, as it’s closer to what I’ve actually been using). We’re going to create a site that automatically builds a PDF version for each first-level subsection (project) and puts it alongside the subsection’s index.html. Basically, this is a slightly more complex variant of the previous example.

Let’s imagine we have a single Sphinx project with a couple of first-level sections corresponding to the company’s projects, for example Foo and Bar (give me that medal for originality, yeah). Your folder structure will look like this:

Acme
|  index.rst
|_ Foo
|  |_1.0.0
|    |_index.rst
|  |_1.1.0
|    |_index.rst
|
|_ Bar
   |_1.0.1
     |_index.rst

Yeah, we also have versions. I use the following script for projects with this layout:

#!/usr/bin/python
# -*- coding: utf-8 -*-

import os
import errno
import json
import argparse
import datetime

parser = argparse.ArgumentParser(description='Builds the documentation project.')
parser.add_argument(
    '--test', '-t',
    dest='test',
    action='store_true',
    help="Build the test version (sends output to the test server)."
)
parser.add_argument(
    '--local', '-l',
    dest='local',
    action='store_true',
    help="Build the local version (doesn't send output anywhere)."
)
parser.add_argument(
    '--no-pdf', '-p',
    dest='nopdf',
    action='store_true',
    help="Don't build PDFs (to save time, when debugging HTML)."
)
parser.add_argument(
    '--verbose', '-v',
    dest='verbose',
    action='store_true',
    help="Write output to log or to screen."
)
args = parser.parse_args()

# sets the default for what to do with log output if not verbose
log = {
    'html': ' > ../html.log'
}
if args.verbose:
    log['html'] = ''

def mkdir(path):
    """
    The function to make directories.
    """
    try:
        os.makedirs(path)
    except OSError as exc:
        if exc.errno == errno.EEXIST and os.path.isdir(path):
            pass
        else:
            raise

def sh(script):
    """
    Simple wrapper for bash
    """
    os.system("bash -c '%s'" % script)


if __name__ == "__main__":

    # finds the project folder and cd into that
    pwd = os.path.abspath(os.path.dirname(__file__))
    os.chdir(pwd)

    # opens targets.json and forms the list of targets
    json_data = open("targets.json").read()
    trgt_list = json.loads(json_data)

    # adds _tmp folder
    if os.path.exists(os.path.dirname("_tmp/")):
        sh('rm -r _tmp/')
        print "-- _tmp/"
    mkdir('_tmp/')
    print "++ _tmp/"

    # adds _metatmp folder
    if os.path.exists(os.path.dirname("_metatmp/")):
        sh('rm -r _metatmp/')
        print "-- _metatmp/"
    mkdir('_metatmp/')
    print "++ _metatmp/"

    # adds _pdf folder
    if os.path.exists(os.path.dirname("_pdf/")):
        sh('rm -r _pdf/')
        print "-- _pdf/"
    mkdir('_pdf')
    print "++ _pdf/"

    # copies everything into _metatmp
    sh('rsync -r --exclude _metatmp/ * _metatmp/')

    os.chdir('_tmp')
    print ">> _tmp"

    if not args.nopdf:

        for trgt in trgt_list:
            uptrgt = trgt_list[trgt].encode('utf-8')
            # forms the list of versions based on project subdirectories
            vrsn_list = os.walk('../%s/' % trgt).next()[1]
            #builds PDFs for every version in version list
            for vrsn in vrsn_list:
                sh('rm -rf *')

                # copies all the necessary files including temp.py as conf.py
                sh('cp -r ../%s/%s/* .' % (trgt, vrsn))
                sh('cp ../temp.py conf.py')
                sh('cp ../pdf_logo.png .')
                sh('cp ../Makefile .')

                # sets target names and version in conf file for each subproject
                sh("find \"conf.py\" -print -exec sed -i'' \"s#&TRGT#%s#g\" {} \; >/dev/null" % trgt)
                sh("find \"conf.py\" -print -exec sed -i'' \"s#&UPTRGT#%s#g\" {} \; >/dev/null" % uptrgt)
                sh("find \"conf.py\" -print -exec sed -i'' \"s#&VRSN#%s#g\" {} \; >/dev/null" % vrsn)

                if args.verbose:
                    log[trgt + vrsn] = ''
                else:
                    log[trgt + vrsn] = ' > %s%s_pdf.log' % (trgt, vrsn)

                print("\033[36m")
                sh(
                    'make latexpdf %s && echo "\033[1;32mPRODUCED: %s%s.pdf\033[0m" || '
                    'echo "\033[1;31mNOT PRODUCED: %s%s.pdf\033[0m"' % (log[trgt + vrsn], trgt, vrsn, trgt, vrsn))
                print("\033[0m")

                # adding links to PDF to the subsection index.rst
                sh("find '../_metatmp/%s/%s/index.rst' -print -exec sed -i'' \"s/.. &//g\" {} \; "
                   ">/dev/null" % (trgt, vrsn))
                sh("find '../_metatmp/%s/%s/index.rst' -print -exec sed -i'' \"s/TARGET/%s%s/g\" {} \; "
                   ">/dev/null" % (trgt, vrsn, trgt, vrsn))
                # copies the PDF file to the _pdf folder for temporary storage
                sh('cp _build/latex/%s.pdf ../_pdf/%s%s.pdf' % (trgt, trgt, vrsn))

    os.chdir('../_metatmp')
    print("\033[35m")
    sh('make html %s && echo "\033[1;32mPRODUCED:HTML\033[0m" || echo "\033[1;31mNOT PRODUCED: HTML\033[0m"' % log[
        'html'])
    print("\033[0m")
    if not args.nopdf:

        for trgt in trgt_list:
            vrsn_list = os.walk('%s/' % trgt).next()[1]
            for vrsn in vrsn_list:
                # copies the produced PDFs from the _pdf folder to the subsection root
                sh('cp ../_pdf/%s%s.pdf _build/html/%s/%s/' % (trgt, vrsn, trgt, vrsn))

    os.chdir('..')

    if args.test:
        sh('echo "TEST VERSION\nPRODUCED:%s" > _metatmp/_build/html/VRSN' % datetime.datetime.now().strftime(
            '%Y-%m-%d %H:%M:%S'))
        sh("cp -r _metatmp/_build/html/* /var/www/html/")
    elif args.local:
        pass
    else:
        sh("scp -i ../id_rsa -r _metatmp/_build/html/* serveruser@server:/server/path/")

    sh('rm -r _tmp/')
    sh('rm -r _pdf/')

Don’t worry, it only looks kinda big; the script is rather simple. Also, I’ve commented the hell out of it, so you should be able to figure it all out. Note that you also need to create a targets.json file in the root of your project, containing the following lines (assuming we’re using the structure we agreed on in the beginning):

{
  "foo" : "Foo Foo",
  "bar" : "Barrington"
}

The file tells the script the full project names and how they correspond to the target names (folder names) in the structure. You will also need a temp.py file containing only the info we need for PDF building, with most of the target names and version numbers represented as placeholders for injection (yeah, I know this is hacky, but I didn’t want to bother with imports, dependencies, etc.). First of all, it should have &VRSN tags:

# The short X.Y version.
version = '&VRSN'
# The full version, including alpha/beta/rc tags.
release = '&VRSN'

It should also have tags in the LaTeX part of the settings:

latex_documents = [
  ('index', '&TRGT.tex', u'&UPTRGT',
   u'ACME', 'manual'),
]

Other than that, temp.py may resemble your usual conf.py. The reason for keeping two files is that conf.py is used for HTML and has preset version and project name values for the project as a whole, so it’s better to distinguish between the file used for injections and the main configuration file, so that they don’t mess with each other. Note that if you need to add some additional parameters or a preamble to the LaTeX output, you should do that in temp.py, as conf.py is not used for building PDFs at all.
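For example, if the PDFs need a custom paper size or extra LaTeX packages, the latex_elements dictionary goes into temp.py rather than conf.py; something along these lines:

# temp.py only: extra LaTeX settings used for the PDF builds
latex_elements = {
    'papersize': 'a4paper',
    'pointsize': '10pt',
    'preamble': r'''
\usepackage{tabularx}
''',
}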

If we prepare the project this way, the script should build PDFs for every subproject and put them into the subproject’s HTML root. Ideally, the HTML version could also be built separately for every subproject (so that the right project name and version appear in each one); a rough sketch of that follows below. This script is more of a proof of concept than an out-of-the-box solution, but if you now understand Sphinx’s capacity for extension and automation, you can create projects of any complexity yourself.
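Here is a rough sketch of that idea: per-subproject sphinx-build calls that reuse the root conf.py via -c and override the project name and version with -D, driven by the same targets.json. I haven’t wired this into the script above, so treat it as a starting point rather than a finished solution.

# Sketch: build HTML separately for every subproject, overriding the
# project name and version via sphinx-build's -D option.
import json
import os
import subprocess

targets = json.load(open('targets.json'))
for trgt, full_name in targets.items():
    # every subdirectory of the target is a version, as in the layout above
    for vrsn in next(os.walk(trgt))[1]:
        subprocess.check_call([
            'sphinx-build', '-b', 'html',
            '-c', '.',                           # reuse the root conf.py
            '-D', 'project=%s' % full_name,
            '-D', 'version=%s' % vrsn,
            '-D', 'release=%s' % vrsn,
            '%s/%s' % (trgt, vrsn),              # source dir
            '_build/html/%s/%s' % (trgt, vrsn),  # output dir
        ])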

Written by wswld

April 3, 2015 at 2:26 pm

Humble Collection of i3wm Lifehacks: Part I


Lifehack 1: Locking the Screen

This is kind of a no-brainer, but even I sometimes look for a place to copy some of this syntax from (I’m lazy and don’t always keep it in my head), so let’s start with this one. i3wm ships with a beautiful and robust screen locker, i3lock, which can be launched like this:

i3lock -c 000000

It will lock the screen with a black overlay. The problem is that you don’t want to type this command every time you lock the screen. We need to add a shortcut to the i3 config file:

bindsym $mod+Shift+Tab exec "i3lock -c 000000"

Now when we press the combination of the mod key (Win in my case) and Shift+Tab, our screen gets locked.

Lifehack 2: Activating/Deactivating the Second Screen

If you use i3wm on a daily basis, you probably know that a second screen is not turned on automatically. You have to manage displays manually with the xrandr command. If we run this command without arguments, we’re going to get something like this:

Screen 0: minimum 320 x 200, current 3200 x 1080, maximum 8192 x 8192
LVDS1 connected 1280x800+1920+0 (normal left inverted right x axis y axis) 261mm x 163mm
   1280x800      60.02*+  50.05  
   1024x768      60.00  
   800x600       60.32    56.25  
   640x480       59.94  
VGA1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 521mm x 293mm
   1920x1080     60.00*+
   1680x1050     59.95  
   1280x1024     75.02    60.02  
   1440x900      59.89  
   1280x960      60.00  
   1280x720      59.97  
   1024x768      75.08    70.07    60.00  
   832x624       74.55  
   800x600       72.19    75.00    60.32    56.25  
   640x480       75.00    72.81    66.67    60.00  
   720x400       70.08  
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)

On a rare occasion, writing something like this would not be a problem:

xrandr --output VGA1 --right-of LVDS1 --auto

Well, I know you’re probably well aware of how xrandr works. Just in case.

However, it may get pretty tedious if you use the second screen on a daily basis and regularly unplug it from your laptop. The best way to go is to add a script shortcut to your /usr/bin/ or /bin/ directory. Run the following lines:

printf '#!/bin/bash\n\nxrandr --output VGA1 --right-of LVDS1 --auto' > /usr/bin/screenswitch
chmod 755 /usr/bin/screenswitch

Now we can use the screenswitch command, which by itself doesn’t make things much easier. What would certainly help is a key shortcut, so let’s add a line similar to the one from the previous hack to our i3 configuration:

bindsym XF86Display exec "screenswitch"

Now when you press your special key combo (Fn+F7 on my ThinkPad), you enable or disable the second screen. Try some other key combination if you have no special display button. Of course, the script itself is pretty basic: it only works if your screen behaves well with --auto and you use the same second screen daily. There are more complex scripts available all over the web, and a slightly smarter sketch follows below.
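For something slightly smarter, here is a small sketch in Python (it could live in /usr/bin/screenswitch just as well) that enables VGA1 only when it is actually plugged in and turns it off otherwise. The output names are the ones from my xrandr listing above; yours will likely differ.

#!/usr/bin/env python
# Sketch: enable the external output when it is plugged in, disable it otherwise.
# Output names (VGA1/LVDS1) match the xrandr listing above; adjust for your machine.
import subprocess

EXTERNAL = 'VGA1'
INTERNAL = 'LVDS1'

state = subprocess.check_output(['xrandr'])
if ('%s connected' % EXTERNAL).encode() in state:
    subprocess.call(['xrandr', '--output', EXTERNAL,
                     '--right-of', INTERNAL, '--auto'])
else:
    subprocess.call(['xrandr', '--output', EXTERNAL, '--off'])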

Lifehack 3: Locking the Screen on Wake

If you use pm-utils with your i3wm setup, you’ve probably noticed that the screen is not locked when the laptop wakes up after suspend or hibernate. It’s very insecure. Let’s try to fix it. Create the file /etc/pm/sleep.d/91blocker and add the following lines to it:

#!/bin/sh
case "$1" in
        thaw|resume)
                su youruser -c '/usr/bin/i3lock -c 000000'
                ;;
        *) exit $NA
                ;;
esac

Don’t forget to change youruser to your username. Now let’s make sure the file has the right permissions:

chmod 755 /etc/pm/sleep.d/91blocker

Now, if we run pm-suspend or pm-hibernate, our screen is going to be locked on wake. This script has one shortcoming, though: it doesn’t lock the screen instantly, so you may see your stuff for a couple of seconds before it gets locked. If that’s not a critical issue for you, feel free to use it; otherwise you may need to work on it or find a different solution altogether. If you have any ideas how to improve it, let me know.

Written by wswld

March 21, 2015 at 11:33 pm



Short History of my Relationship with Lenovo ThinkPad


My own ThinkPad X201 workhorse.

For those who have limited time, this post comes down to the following statement: the Lenovo ThinkPad X201 is the best X-series ThinkPad created yet (although after a somewhat heated discussion on Reddit, the X220 looks better). What follows is my attempt at proving this point with merely anecdotal evidence. I’m funny like that. Here comes a short story of my relationship with the Lenovo ThinkPad X series.

In 2013 I was working as a technical writer (more technical than a writer, actually) at a medium-sized web-slash-mobile startup, and when the MacBook they’d given me failed, I decided to try something new. At that time I had become increasingly interested in Lenovo ThinkPads (yeah, they hadn’t been IBM for quite a long time already). A couple of my colleagues had X220 machines, and they seemed pretty solid and professional, especially with all kinds of Linux installed on them (I worked with a bunch of Python devs, and everybody used their favorite flavor of Linux). My transition from the MacBook to a ThinkPad was also dictated by how the MacBook wouldn’t let me use i3wm (which I was completely sold on at the time) as the main WM. So I went to my manager and he approved my order. The problem was that I wasn’t really familiar with modern ThinkPads then and ordered a 14″ model (figured I could use all the extra screen space). I figured any ThinkPad would do. It was my mistake.

I got a T431s, which was admittedly quite expensive at the time but didn’t look like a ThinkPad at all. If anything, it resembled a plastic version of a MacBook. It had a rather disgusting chiclet (island-type) keyboard, no LED indicators, a thinner body and, as a result, far fewer ports (although, for the record, I do understand that the S stands for slim). The only thing it had in common with the previous generations of ThinkPads was the clit (the TrackPoint), which was kinda useless without the additional row of buttons; the device actually had no touchpad buttons at all, as it mimicked the MacBook-style one-piece touchpad (an awful, awful trend, actually). The hardware was of questionable quality and gave me lots of headaches on Linux (especially the WiFi), despite ThinkPads traditionally being considered some of the best laptops when it comes to Linux compatibility. I worked on this machine until the company went under, and somehow got used to it, but it never lived up to the image of a ThinkPad I had in my head.

Even after that I didn’t give up on the ThinkPad series completely, though it was clearly going downhill with every subsequent model. My wife got an X230 at work, and as I got to play with the device a bit, I had the impression that it was not as bad as the T431s, so as the line moved forward, I decided to go in the opposite direction. At that time I started working at a medium-sized enterprise infosec company; they had ThinkPads all over the place, and most of them were ThinkPads as I had expected them to be from day one. These were X201 models. They aren’t as up to date as the later models, but they have all the right features. Here is a short comparison of some of the recent X-series models:

  • LED indicators: the X201 has 9 on the front and 3 mirrored on the back; the X220 has 3 on the front and 2 on the back; the X230 has 2 on the front and 2 on the back; the X240 has none (!).
  • Keyboard: classic on the X201 and X220, chiclet on the X230 and X240.
  • Ports: the X201 has VGA, Ethernet, 3 USB, separate ports for mic and headphones, a phone line port and an ECSC slot; the X220 has VGA, Mini DisplayPort, 3 USB (1 of them USB 3.0), a combo audio jack, a media card reader slot and an ECSC slot; the X230 has VGA, Mini DisplayPort, 3 USB (2 of them USB 3.0), a combo audio jack, a media card reader slot and an ECSC slot; the X240 has VGA, Mini DisplayPort, 2 USB 3.0, a combo audio jack, a media card reader slot and an ECSC slot.
  • ThinkLight (keyboard flashlight): present on the X201, X220 and X230; the X240 has a backlit keyboard instead. Get your tongue out of Apple’s ass, Lenovo!
  • Clit buttons: present on the X201, X220 and X230; on the X240 the touchpad is a platform with areas for the clit buttons, which is kinda sad.*

* To be fair, the ThinkPad X250 actually went back to having hardware buttons, so the X240 is not a whole new tendency, but rather a disappointing stumble.

So, to sum it up for X201:

  • There is the right number of LED indicators (the X220 and X230 have fewer, and the X240 seems to have none whatsoever), and they are mirrored on the back of the machine, which is convenient when the lid is closed.
  • The classic ThinkPad keyboard is just right for coding. No trendy chiclet bullshit.
  • Ports and slots are the area where the age of the machine shows the most. It doesn’t have any USB 3.0 ports, and a Mini DisplayPort would actually be nice. Still, it’s much better than the X240.
  • There are two rows of buttons, one for clit mode and the other for the touchpad. Although I work with the clit most of the time, I find having an additional bottom row rather convenient, yet I’d probably go with no touchpad at all.
  • Flashlight!
  • The only problem with the X201 for me is that it’s not officially available for sale anymore (at least where I live); I even tried to buy out my office X201 when leaving the company, but they wouldn’t let me.

So I found a place that sells used ThinkPads for a reasonable price and bought one there. This machine is pure magic, and it doesn’t matter that it’s a bit outdated. It has an i3 CPU (which is still fair these days), up to 8 GB of RAM (which is usually enough), and an extended 6-cell battery that makes up for its age (it easily gives me 5 or 6 hours of relaxed coding), while the overall design hints at the times when the word ThinkPad meant something more than “an ugly plastic MacBook knockoff”. Without much exaggeration I can say that in the 12.5″ X line of ThinkPads, the X201 (at least to me) seems greatly superior to anything made before (due to being relatively modern) or after. It’s still relevant today and has the potential to be a developer’s muse (fetishist talking) and workhorse.

An update is due: although I still think the X201 is one of the best ThinkPad X-series machines, after a heated Reddit discussion the X220 seems to be an even better model, with all the advantages of the X201 (except the number of LEDs), plus newer hardware and a better screen. You should probably consider that machine if you are shopping for a classic-keyboard ThinkPad.

Written by wswld

March 14, 2015 at 10:47 pm

ASUS X102B and second screen on Debian


I’ve stumbled upon a very capricious piece of hardware lately: the ASUS X102B. Basic (very basic) video seems to work out of the box, but there are numerous problems here and there, especially with the second screen, which doesn’t seem to work at first glance. If you run xrandr, it will only recognize the “default” output, and even that will look pretty much broken. The first thing to do was:

sudo apt-get install firmware-linux-nonfree

Actually, this is the go-to solution (as in, the first thing to try) for many Debian hardware-related problems, as the distro doesn’t include non-free firmware by default. After that, xrandr recognizes most outputs the right way, but it may still miss the right modes. If so, you can add your mode manually.

First, retrieve the full information about the mode:

cvt 1920 1080

You will get something like this:

# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

Now use this info to add the mode to xrandr and then assign it to the output:

xrandr --newmode "1920x1080" 173.00  1920 2048 2248 2576  1080 1083 1088 1120 -HSync +VSync
xrandr --addmode VGA-0 "1920x1080"

Use the xrandr command to check the list of available outputs and modes. You probably know that already, but here is how to use xrandr with the newly created mode:

xrandr --output VGA-0 --mode "1920x1080"

Well, now that this one issue is officially resolved, I’m off to fight the rest of the couple hundred problems that arise when trying to use this laptop with Debian. Wish me luck, and leave a comment if you’ve had trouble with this machine (when used with Debian, that is); we could try to work it out together.

Written by wswld

February 2, 2015 at 9:17 pm

Update on BioAid and my Hearing in General


This post was written about a year ago, but it got lost among the drafts, as I haven’t been a frequent guest here lately. Now it’s time to finally publish it for good.

Some time ago I did a big, comprehensive review of the BioAid hearing app for iOS. A couple of months down the road, the situation has changed entirely. I’ve got some good news and some bad news for you. Let’s start with the latter.

Bad News

I’ve joined the dark side: after several months of struggling, I finally bought a commercial hearing aid. It was not a spontaneous decision, though; I’d been thinking it over thoroughly for a couple of weeks. Here is what made up my mind:

1. I started to notice some discomfort while using BioAid, for the most part minor headaches and mental fatigue. I’d been taking some medications that could have caused this effect too, but I do believe BioAid had its share of responsibility in driving me into this condition. As I’d been using the Gradual HF regime, it could be a little too much high frequency in my case: not enough to notice at once, but with a profound effect on me in the long run. I was feeling ruined by weekday evenings after a full day of continuous BioAid use at work, and I was feeling OK on weekends, when I hardly used the app at all. I think it was natural fatigue combined with the sound irritant of BioAid. I’m not saying that you’re sure to feel exactly the same way, but I strongly recommend you stop using the app as soon as you feel any side effects. You should also think twice before starting to use the app if you have any sort of rare medical condition. The creators of the app warn you about that themselves.
2. The annoying state of being unable to use my iPhone to its full potential throughout the day, and the implications of using it as a hearing aid in day-to-day situations, made me feel quite miserable. If you’re interested in what I’m talking about, I wrote about all the limitations in the original post. However, it could be all right if you really can’t afford a hearing aid or want to use the app as a temporary solution.

As a result, I went to the same center where I had refused to buy an aid in the first place and bought the exact same aid I had been offered back then. It is OK (a Widex, by the way), but I haven’t changed my mind completely. I do think that devices like BioAid are the future of the hearing aid market, which is really underdeveloped and monopolistic in this day and age.

Good News

Some time ago I got this letter in my inbox:

    Dear Vsevolod,

    You contacted me a couple of months back about the original BioAid app. I’d like to let you know that I’ve been looking at the hearing app idea again recently and have just released (yesterday) a rather more powerful and flexible piece of software. Check out aud1.com for more details and don’t hesitate to get back to me if you have any questions.

    Best,

    Dr. Nick Clark

Dr. Nick Clark is one of the scientists behind the original BioAid project (the one who wrote most of the Objective-C code, actually), and Aud1 is his solo project. Yes, basically it’s BioAid 2.0, and it’s paid now. Actually, it’s not quite BioAid 2.0, but rather an implementation of the BioAid algorithm, as Nick Clark himself explained:

    I’d just like to clear up any confusion that I may have caused by my haphazardly typed original email! Aud1 is not the new name for BioAid. BioAid is the name of a biologically-inspired open-source gain model. The original BioAid app was a particular implementation of this algorithm (confusingly also named BioAid, but referred to in-house as “the BioAid app”). Aud1 is a much more flexible framework that has been developed independently by one of the original BioAid team (me), and currently runs an optimized version of the BioAid algorithm. However, there are plans to allow the user to switch between various algorithm designs in the future, potentially making Aud1 a useful research tool for field comparisons. Switching algorithms is not like changing the processing strategy on a hearing aid, but rather more like switching out an entire part of the hearing instrument.

    Aud1 is a platform for the BioAid algorithm, and potentially other algorithms in the future, allowing it to behave more like the lab scale version that we used (providing features like linked stereo processing if the user has appropriate input hardware). Aud1 is no more a hearing aid than the original BioAid app can be considered a hearing aid, because they are just a software component restrained by the limitations of the devices on which they run. I prefer the deliberately vague term “assistive hearing technology”. Limitations aside, the BioAid app really seemed to help a select group of people, and this motivated me to push the technology further, adding many features requested by BioAid-app users. Check it out if you like.

I installed the app and field-tested it right away. I was glad to see that some of the annoying issues of the original version were gone. The app features a much cleaner interface and more flexible configuration, with sliders instead of fixed presets. There are no more welcome popups appearing on every startup, and the app seems to preserve its configuration on relaunch.

It also introduces some new features, like the ability to choose the bit rate of the output, support for stereo, a latency test and input/output calibration. It also provides some basic session info and a logger for the tinkerers. The application now looks more mature and ready for commercial distribution. Although no essential improvement over the original app was introduced, it looks, feels and hears much better, which is enough for me to reach for my wallet. Still, there are issues that were ignored, like returning to hearing aid mode after a call (as the stock music app does) and some other minor problems. The regimes are the same for the most part (albeit a tad more configurable), hence it hasn’t solved my headache problem. Eventually, I’ve abandoned the concept of the iPhone as an everyday hearing aid for now. Again, that doesn’t mean it won’t work for you. Give it a try.

At the end of the day, I do think that this version is worth every penny, even if you’re not particularly amazed by the new features and improvements. You may consider it a little contribution to an amazing project, especially if you have been using the original BioAid for some time already. After many months of extensive BioAid usage, I was glad to pay it back. Hopefully, you will be too. If you’re completely new to this kind of app, my advice would be to try BioAid first and see whether you experience any side effects and whether it helps your hearing; then you can easily migrate to Aud1.

A little year-down-the-road update is due. As of now, the project seems abandoned: the last updates on the BioAid and Aud1 Facebook pages date back to September 13, 2013. It is quite unfortunate, as the project showed big promise. Hopefully, Nick Clark hasn’t abandoned the idea completely and is working on something new in the same vein. Time will tell.

Written by wswld

September 24, 2014 at 1:20 pm


