This post was written about a year ago, but it got lost among the drafts, as I haven’t been a frequent guest here lately. Now it’s time to finally publish it for good.
Some time ago I did a big, comprehensive review of the BioAid hearing app for iOS. A couple of months down the road, the situation has changed entirely. Now I’ve got some good news and some bad news for you. Let’s start with the latter.
I’ve joined the dark side: after several months of struggling, I’ve finally bought a commercial hearing aid. It wasn’t a spontaneous decision, though; I had been thinking it over thoroughly for a couple of weeks. Here is what made up my mind:
- I started to notice some discomfort while using BioAid, mostly a minor headache and mental fatigue. I had been taking some medications that could have caused this effect too, but I do believe BioAid had its share of responsibility for driving me into this condition. Since I was using the Gradual HF regime, the frequencies may have been a little too high in my case: not high enough to notice at once, but with a profound effect on me in the long run. I felt ruined by weekday evenings after a full day of continuous BioAid use at work, and I felt OK on weekends, when I hardly used the app at all. I think it was natural fatigue, combined with the sound irritation from BioAid. I’m not saying that you’re sure to feel exactly the same way, but I strongly recommend that you stop using the app as soon as you notice any side effects. You should also think twice before starting to use the app if you have any sort of rare medical condition. The creators of the app warn you about that themselves.
- The annoying state of being unable to use my iPhone to its full potential throughout the day, and the implications of using it as a hearing aid in day-to-day situations, made me feel quite miserable. If you’re interested in the details, I wrote about all the limitations in the original post. Still, it could be alright if you really can’t afford a hearing aid, or want to use the app as a temporary solution.
As a result, I went back to the same center where I had refused to buy an aid in the first place and bought the exact same aid I had been offered back then. It is OK (a Widex, by the way), but I haven’t completely changed my mind. I still think that devices like BioAid are the future of the hearing aid market, which is really underdeveloped and monopolistic in this day and age.
Some time ago I got this letter in my inbox:
You contacted me a couple of months back about the original BioAid app. I’d like to let you know that I’ve been looking at the hearing app idea again recently and have just released (yesterday) a rather more powerful and flexible piece of software. Check out aud1.com for more details and don’t hesitate to get back to me if you have any questions.
Dr. Nick Clark
Dr. Nick Clark is one of the scientists behind the original BioAid project (the one who wrote most of the ObjC code, actually), and Aud1 is his solo project.
Yes, basically it’s BioAid 2.0, and it’s paid now. Actually, it’s not quite BioAid 2.0, but rather an implementation of the BioAid algorithm, as Nick Clark himself explained:
I’d just like to clear up any confusion that I may have caused by my haphazardly typed original email! Aud1 is not the new name for BioAid. BioAid is the name of a biologically-inspired open-source gain model. The original BioAid app was a particular implementation of this algorithm (confusingly also named BioAid, but referred to in-house as “the BioAid app”). Aud1 is a much more flexible framework that has been developed independently by one of the original BioAid team (me), and currently runs an optimized version of the BioAid algorithm. However, there are plans to allow the user to switch between various algorithm designs in the future, potentially making Aud1 a useful research tool for field comparisons. Switching algorithms is not like changing the processing strategy on a hearing aid, but rather more like switching out an entire part of the hearing instrument.
Aud1 is a platform for the BioAid algorithm, and potentially other algorithms in the future, allowing it to behave more like the lab scale version that we used (providing features like linked stereo processing if the user has appropriate input hardware). Aud1 is no more a hearing aid than the original BioAid app can be considered a hearing aid, because they are just a software component restrained by the limitations of the devices on which they run. I prefer the deliberately vague term “assistive hearing technology”. Limitations aside, the BioAid app really seemed to help a select group of people, and this motivated me to push the technology further, adding many features requested by BioAid-app users. Check it out if you like.
I installed the app and field-tested it right away. I was glad to see that some of the annoying issues of the original version were gone. The app features a much cleaner interface and more flexible configuration, with sliders instead of fixed presets. There are no more welcome popups appearing on every startup, and the app seems to preserve its configuration on relaunch.
It also introduces some new features, like the ability to choose the bit rate of the output, stereo support, a latency test and input/output calibration. It also provides some basic session info and a logger for the tinkerers. The application now looks more mature and ready for commercial distribution. Although no essential improvement over the original app was introduced, it looks, feels and sounds much better, which was enough for me to reach for my wallet. Still, some issues were left unaddressed, like returning to hearing aid mode after a call (the way the stock Music app resumes playback), and some other minor problems. The regimes are the same for the most part (albeit a tad more configurable), hence it hasn’t solved my headache problem. Eventually, I abandoned the concept of the iPhone as an everyday hearing aid, for now. Again, that doesn’t mean it won’t work for you. Give it a try.
At the end of the day, I do think this version is worth every penny, even if you’re not particularly amazed by the new features and improvements. You may consider it a little contribution to an amazing project, especially if you have been using the original BioAid for some time already. After many months of extensive BioAid usage, I was glad to pay it back. Hopefully, you will be too. If you’re completely new to this kind of app, my advice would be to try BioAid first and see whether you experience any of the side effects and whether it actually helps your hearing; then you can easily migrate to Aud1.
A little year-down-the-road update is due. As of now, the project seems abandoned: the last updates on the BioAid and Aud1 Facebook pages date back to September 13, 2013. It is quite unfortunate, as the project showed big promise. Hopefully, Nick Clark hasn’t abandoned the idea completely and is working on something new in the same vein. Time will tell.
Gotcha 1: Getting Started with Sphinx Theming
An update is due. It seems absolutely obvious to me, but apparently it is not that obvious to some: you can’t do anything about your Sphinx theme without some basic CSS and HTML skills. Without them, you can only change some of the theme parameters, if the developer was kind enough to add any, but that is well covered in the official documentation.
Customizing Sphinx visuals was somewhat upsetting to me at first, since the process is not straightforward and the documentation is scarce. It could be a little more user-friendly. However, as soon as you get a grip on the basics, it gets pretty smooth and simple. You may have already tried copying over the default theme, only to discover that there is nothing particularly useful in there for an inquisitive scholar. Only one of the standard Sphinx themes is in fact complete; the rest simply inherit its properties and add some minor alterations. This theme is called Basic, and it’s a minimal sandbox template, the only theme that can help you get to the very bottom of Sphinx customization. Later you’ll be able to inherit from it and create a template consisting only of your alterations, but for a start it’s OK to copy Basic in its entirety.
Hopefully, you’ve already created a folder for your Sphinx project and initiated it by issuing:

sphinx-quickstart
Or you may have an existing Sphinx project you want to theme — it’s up to you. In your project folder, create a _themes directory, then copy the theme folder there and rename it. The Basic theme should be located in the site-packages folder of your active Python install. On my Mac it’s /Library/Python/2.7/site-packages/Sphinx-1.2b1-py2.7.egg/sphinx/themes/basic/, but Python on Mac OS X is just weird. If you’re using Linux (/usr/share/sphinx/themes/basic/ on Debian) or Windows, you should look it up yourself.
The next step is to change your project’s conf.py accordingly. First, make sure that the following line is uncommented and correct:
html_theme_path = ['_themes']
Don’t forget to check the value of the html_theme parameter too:
html_theme = 'renamed_basic'
Now you can start making alterations to the theme. You will find that the HTML files in there aren’t really HTML files, but templates (Sphinx uses the Jinja template engine) with some stuff automatically inserted at build time. You can combine these automatic tags with basic HTML, and as soon as you figure out how it all works, you can move some of the interactive tags around, or get rid of some of them altogether. Don’t forget to check that you’re not breaking anything, though. Most visual aspects of a Sphinx theme are modified through the main CSS file, which is located in the static folder; for the Basic theme it is basic.css_t. Notice that the t in the extension indicates that this is a template; other than that, it can be viewed and edited as a plain CSS file. If you’re interested in the values provided by Sphinx templates and how you can make use of them, consult the official documentation. If you want the main CSS file to have some other name, you can change that in theme.conf, where there are also some other settings that could be of interest.
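For reference, once you move from a full copy of Basic to the inheritance approach, a minimal theme.conf could look roughly like this (a sketch; the file and option names follow the renamed_basic example above):

```ini
[theme]
# Inherit everything from Basic and only override what you change.
inherit = basic
# The main CSS file the theme links to; rename it here if you like.
stylesheet = renamed_basic.css
pygments_style = default
```

With inherit set, any template or static file you don’t override is taken from Basic, so your theme folder can contain only the files you actually changed.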
Gotcha 2: Spoiler
.. admonition:: Request
   :class: splr

   Request example and parameters.
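Assuming the classic Sphinx setup, where jQuery already ships with the theme, a minimal sketch of such a script (hooked up through the theme’s layout template or a file in _static) could look like this:

```javascript
$(function () {
  // Collapse every admonition that carries the custom "splr" class.
  $('.admonition.splr').each(function () {
    var title = $(this).children('.admonition-title');
    var body = $(this).children().not(title);
    body.hide();                         // collapsed by default
    title.css('cursor', 'pointer');
    title.click(function () {
      body.slideToggle('fast');          // expand/collapse on title click
    });
  });
});
```

The selectors rely on the standard markup Sphinx generates for admonitions (a div with the admonition class plus your custom class, and a paragraph with the admonition-title class inside).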
If implemented correctly, this script turns the splr admonition into a collapsible drawer. I’ve been actively using this when documenting HTTP APIs, since it’s very helpful to hide JSON responses by default. Note that you could also use this method for special CSS effects: imagine if, aside from the usual note, warning and tip, you could have yellow, blue and purple boxes for whatever purpose you can think of. Well, it couldn’t be any simpler. Admonitions are good default containers for parts of your text that should differ in design or function from the rest of the page.
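As a quick illustration of the colored-boxes idea, a few custom admonition classes (the names yellow, blue and purple are just made up here; you’d set them with :class: as in the splr example) could be styled like this:

```css
/* Hypothetical color classes for .. admonition:: blocks. */
.admonition.yellow { background: #fff9c4; border: 1px solid #fbc02d; }
.admonition.blue   { background: #e3f2fd; border: 1px solid #1e88e5; }
.admonition.purple { background: #f3e5f5; border: 1px solid #8e24aa; }
```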
Gotcha 3: Interactive TOC in Sidebar
You may have noticed that Sphinx often adds a TOC to the sidebar automatically if it’s not explicitly placed in the page itself. While this is certainly a very useful feature, sometimes things get out of control. I didn’t use the worst case in the picture on the right, but it can grow to innumerable 1st-level sections, each of which can have a number of subsections, and so on. Sometimes it is a clear sign that the page should be reorganized into multiple standalone pages united by a category, but that’s not always possible or needed. It’s perfectly alright to have a long and deep TOC in some cases, and the default Sphinx theme is terrible in that regard.
You can use an updated version of the script from the previous example to collapse some parts of the TOC by default. Note how this script works with the ul and li tags of the TOC tree list. Some things are applied to the highest level of the list, some to the subsequent levels; you can observe this especially well in the different styles applied to different levels of the list, so that you can tell whether a title is 1st, 2nd or even 3rd level. Here is the full script:
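A sketch of it follows; the selectors assume the default sidebar markup (.sphinxsidebarwrapper with a nested ul toctree), and the toc-level-1, toc-sub and toc-branch class names are my own, meant to be styled in your CSS:

```javascript
$(function () {
  var root = $('.sphinxsidebarwrapper > ul');      // top level of the toctree
  root.addClass('toc-level-1');
  root.find('ul').addClass('toc-sub').hide();      // collapse every sublist
  // Any list item that owns a sublist becomes a clickable "branch".
  root.find('li').has('ul').each(function () {
    var item = $(this);
    var sub = item.children('ul');
    item.addClass('toc-branch').css('cursor', 'pointer');
    item.on('click', function (e) {
      if (e.target === this) sub.slideToggle('fast');  // ignore link clicks
    });
  });
});
```

The e.target check keeps clicks on the actual section links working as navigation, while clicks on the list item itself expand or collapse its sublist.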
Perhaps it won’t look very pretty, but it is a deliberately simplified version to illustrate the concept. If you get the idea, you can alter or add CSS rules to achieve more plausible visuals. You could also work on the lower title levels, but you will have to figure that out for yourself. In this version the script works best with three heading levels, from <h1> to <h3>, but it could definitely handle more.
I’ve compiled these examples into a full-featured theme project on GitHub. I’m going to polish it to some extent and perhaps implement more interactive stuff over time. Feel free to contribute to this little project. Nope, never happened. Well, it did happen actually, but it’s no good, sorry. If you come upon any issue with these little gotchas of mine, let me know. Sure, I’ve tested everything myself, but you never know. Also, you’re free to use any of these examples as building blocks in your work, with no attribution, since they’re rather generic and simple.
I can safely confess that a couple of years ago I didn’t know a single thing about programming. I was aware of some fairly abstract concepts and had a basic understanding of how it all works, but it definitely wasn’t enough. My English teacher had a saying about active vocabulary: “You may learn all the words in the dictionary by heart, but unless you use them regularly and naturally, you don’t really know them.” My situation with programming was somewhat similar: knowing lots of trivia, but having no grasp of the practical side of things. I was determined to fix that as soon as possible. I tried reading a book or two, but it never really got me going. Well, it explained a couple of things here and there, but it was like learning things by heart — tedious and irrelevant (on an absolutely unrelated note: Learn Python the Hard Way is great). At that point one of my techie friends suggested I throw the books away and learn by immersion: set an objective, stumble upon problems, turn to the docs and StackOverflow for possible solutions. That was the moment I started looking for my first project — fairly simple, yet more challenging than a mindless Hello World routine.
Once, I was typing a big portion of plain text on my old, slow Android phone, using another memory-hog office suite with all those controls and sets of buttons on all sides of the screen, and I wished there was something like FocusWriter for Ubuntu: basic, but fairly powerful in terms of achieving that special zen state. There weren’t many such projects in the Android Market back then (yeah, kids, it was called that in the days of old), and this is how the idea of dType struck me. The concept was fairly simple: a minimalistic tool that would let you jot down some text and then pass it to some other application (Evernote, Dropbox, email, etc.) for saving or processing. It was simple enough for grasping the basics, yet quite challenging for a person who had never seen Java code (or any code) before.
That was the moment I started coding. Well, let’s say it was more about googling intensively for just about anything. It was hard. Most of the time I didn’t know what was happening and asked fairly inept questions on StackOverflow. I still do, but now at least I can tell what most parts of my code are doing, or are supposed to do. At first, immersion is like trying to play piano blindfolded — my code probably stunk big time, but at the end of the day it worked, and that was encouraging. My interest in Android development helped me get a job as a technical writer on a bunch of Android-related projects, notably OpenCV for Android. Since I was working mainly on C++ API references, I started to delve into OOP concepts. I had it thoroughly explained to me what a class and a method are and how they relate to each other, what interfaces and abstract classes are, and the rest of that stuff. I’m extremely grateful to my mentors there. Later, working on another project, I had a chance to look closer at working Java code and see these concepts applied in Java. I immediately started refactoring the dType code once again, in an attempt to implement a thorough OOP design and shake off all the redundancy. My code became a little more laconic and neat. Not that it couldn’t get any better, but it was still a huge leap forward for me.
As long as I remember it, dType was constantly improving. At first it was a bunch of undocumented spaghetti code, which got somewhat straightened out by version 0.16 — the earliest version I bother to keep in the repository history, since everything before that was a complete disaster. Perhaps it’s still rather bad, but I’ve managed to cut its length almost in half, provide descriptive JavaDoc (for the sake of it — I know no one will probably bother to read it) and fix a lot of issues while at it. I do feel a little emotionally attached to this code, since it is my first coding experience that has grown into a little indie project of mine. Over the course of two years it has provided me with innumerable challenges and priceless practical experience, but it’s finally time for me to move on. I’ve taken a great interest in Python lately and started a couple of projects in it. Coming back to Java code became more and more daunting. I was also advised by several programmers that I’d better concentrate on getting really proficient with one language for now. My growing frustration with Java’s verbosity ensured that I would end up with Python as my language of choice.
Still, it was a hard decision to drop dType completely. People do use it and clone it on GitHub (it had a couple of official clones before the project was moved back and forth, never mind the actual numbers on GitHub). This project, though certainly quite niche and facile, does work for some. I decided that this suspension is going to be more of a role shift for me: from active developer of the application to its maintainer. It will stay as an open repository on GitHub for you to clone and alter, and it will stay published on Google Play. You can continue to use it in version 0.71. If people send relevant pull requests, I will be happy to merge them and even publish the resulting build as a new version of the app. It’s just that I myself no longer have the time or inclination to introduce new features. It is now exactly the way I envisioned it when I was starting out. My big learning project has reached its objective. It’s finished. My priorities have changed, but if you do care, I would be glad to see your contributions. I’m not naive enough to think it could become a huge open source project, mind you, but I do hope the app can continue living on its own while I’m gone.
Update: no one has decided to contribute to this project yet, and probably no one ever will, as more than a year has passed since this post. Perhaps this is for the best, as I’ve seen people complaining about a couple of nasty visual bugs on some devices. I myself wasn’t able to work with it on a Galaxy S III, as the screen goes black from time to time. So if you really want to revive the project, I can only wish you good luck with that. Seriously, if you want to try, give me your contacts, so that I can talk you out of it.
I will be speaking about my own experiences, and you may have noticed the my in the title, which is there to remind you of the subjective nature of this article. Egocentric to the extreme. I don’t dare to speak about your blog or any other blog out there but my own. I have no intention of converting you to my side, yet I would be very glad to find some like-minded people out there. I know they exist. My own research on the topic has revealed that although static site generators have a really substantial and zealous following, there are sober voices in the crowd appealing to common sense. This is exactly why Kevin Dangoor went “From WordPress to Octopress and Back”, or why Michael Rooney is “Migrating from Octopress to WordPress” — in the exact opposite direction from the majority of switchers.
However, it doesn’t boil down to WordPress vs Octopress, as the issue at hand is much broader and may be framed as dynamic site engines vs static site generators. If you’re not aware of the difference between the two, here are the basics: with a dynamic site, your content is generated on the fly by an application running on the server side; static sites, on the contrary, are pre-generated or written outright in HTML. Basically, dynamic sites are web applications that can change their behaviour instantly, depending on input and other factors, while static sites are just HTML files passively sitting there, waiting to be opened and read. Sure, with the introduction of jQuery, JavaScript and HTML5 to the mix, the difference gets a little less distinct, but let’s stop at this level.
So, static vs dynamic. It could be virtually anything: Tumblr vs Hyde, Blogger vs Pelican, Movable Type vs Jekyll, etc. The major differences between the two models are more or less the same, which means we should really be comparing the models themselves, not their instances.
So, what are the lucrative advantages of the static generation model? What makes people switch so quickly and without looking back, as they say? I came up with the 3 most important reasons:
- Almost endless customizability.
- It’s mostly plain markup text files that you can use with Git.
- Increased loading speed and security.
All three are valid points, and at one point I went down the static path myself with all three in mind. I went with Jekyll, then switched to Octopress. At some point I even tried to make a Sphinx-generated site (sic!) work as a personal web page, but the lack of blog awareness and the increasing complexity made me abandon the idea. When they say that running this kind of site is the easiest thing to do, it is complete nonsense. In terms of comfortable workflow, I can only say a couple of good things about pure Jekyll paired with GitHub Pages, but the result was so raw and required so much customization that the straightforward workflow was hardly an advantage. It is positioned as a toy for true geeks and tinkerers, but I don’t see how anyone could really benefit from this kind of tinkering. I work in IT, and we are here mainly to solve problems, not create tons of complementary issues. Instead of reinventing the wheel, you could as well invest your time into something that really needs to be done, perhaps — writing.
I’m not the only one who noticed it: in the several months I spent experimenting with static generators, I hardly wrote half a post. It is a common problem, as I found when I googled it. People dig so deep into endless customization and switching that eventually they stop writing. I may sound conservative to some, but I still think that blogging is mainly about content. Sure, to some extent this problem applies to dynamic platforms too (ping-pong between WordPress and Blogger, anyone?), but with static generators it grows to catastrophic proportions. They are the ultimate time drain for nerds and wannabes. Sure, if your time is worth basically nothing and constantly tweaking your blog is the best part of having it, be my guest. For me, this kind of time misuse is inexcusable.
Another seeming advantage of static site generators is their reliance on plain text, which can be managed in distributed version control systems like Git. Most switchers assume that since they already use plain text and Git for code, why not use them for a blog? The problem is that this, too, complicates things instead of easing them. Each generator has its own super-easy workflow with different special folders, commands and scripts — it quickly becomes a mess. In this regard, Git is basically one more noose around your neck. I’d focus on one especially nasty use of Git in such a workflow — Octopress. You should fork the original Octopress repo, the _deploy folder will be used for deployment to the pages repository, and your sources should be committed to the special source branch. Now imagine what happens when a somewhat major update gets pushed to the Octopress upstream and your copy has been significantly modified over time. As someone put it: “Octopress is great, until it breaks”. If you have some experience with Git and have seen the Octopress workflow, you can imagine the hell it could possibly be. Actually, Octopress is itself a heavily modified version of Jekyll. No offence, but it all seems like a Rube Goldberg machine to me.
If you google it, you will find lots of people performing full-featured benchmark tests of WordPress vs Octopress with a complete disregard for the principal differences between the two. People start speaking of the security and speed benefits of static sites, forgetting about all the advantages of dynamic sites that come at the price of increased complexity and bloat (yes, I do think WordPress is a tad bloated). Imagine you need to jot down a post draft on a public or someone else’s PC. Will you be cloning the repo, installing Ruby and Octopress, setting up the whole environment to write a short post? What about mobile support? Should you attempt to clone Octopress to your mobile phone? What about preserving drafts in the cloud without publishing them, while having access to them from virtually anywhere, anytime? Can you really put a price on that? People start using Evernote or a similar service for drafts and such, but is it really worth introducing another tool into your workflow? Aren’t mobility and availability worth another couple of seconds of load time? My own choice is comfort and efficiency. I want my blog to be complementary to my technical endeavors, not the other way around.
I started thinking that less is actually more a long time ago, and sometimes it applies to blogging as to almost any other area of our life. You may notice that I don’t even use a standalone WordPress install, but the pretty limited hosted WordPress site. I prefer to pay the engineers at WordPress $13 for domain mapping and settle for less choice in themes, plugins and other options, to focus on writing. We’re all too lost in the world of different platform and workflow options these days. Google it and you will see hundreds of rants about why platform A is decidedly better than platform B, why static sites are better than dynamic ones. You almost never hear that they help you in writing, no. It’s all about SEO, storage space, customization, load speed and other insignificant stuff not directly related to blogging. We’re too obsessed with form and seem to forget about crafting content. But, as I said before, this is all entirely subjective. You may still go down the static route and customize the ass out of your blog. You may even spend several years writing your own static or dynamic blog engine from scratch, which will surely be absolutely unique and different from anything ever done before. Yet I’m writing this post in a beautiful distraction-free WYSIWYG editor, my draft will be preserved online when I press the Save button, and no rake deploy is needed ever again.
A couple of weeks ago my wife and I went to a hearing aid center for a free consultation before, perhaps, buying a new hearing aid. The previous one had been working just fine for five years or more, but it has become unusable lately. At the center they ran all the required tests, created an audiogram and offered a couple of models to choose from. I was satisfied with the devices; however, I didn’t notice any major difference from the one I already had. I was hoping to manage with $500-$600, and I was shocked when they named their price: even the cheapest of the devices was around $1000. We could afford it, but I declined.
Being a person with an inborn defect of the auditory nerve, I have been wearing a BTE hearing aid all my life, since early childhood, and I still remember the day I put the thing on for the first time. For a boy who had never heard the sound of footsteps on asphalt or the birds twittering, it was a marvelous discovery to hear all those sounds for the first time. Later in life, when I was wearing my fifth or sixth hearing aid, this wonderful piece of technology was already taken for granted, and I was actively using it in school, at university and later in the workplace. Gradually, I started to recognize quite a few shortcomings of modern hearing aids:
- Most doctors would suggest you wear aids on both ears, since that really helps you locate sounds and experience stereo, or 3D, hearing. Wearing two devices may sound tempting if you don’t do anything else with your ears, like using a phone or headphones, or participating in all kinds of intense activities (sports?) where people may unwittingly flick an aid off. It’s the physical inconvenience of having something plugged into your ear that is not that simple to take off, but paradoxically simple to drop. People using them day-to-day will understand what I’m talking about. Headphones and phones also require you to take the aid off first, and don’t get me started on the horrible phone regime available in most modern aids. This is actually how I lost one of my aids: I needed to use a phone, took the aid off and missed my pocket. Never saw that device again.
- Whistling. Yes, they whistle constantly, and it is a curse upon people with limited hearing. They whistle even more as the plastic ear mold wears out, which means that ideally it should be replaced every year. The whistling is feedback, produced because the mic and the speaker in the ear mold sit too close to each other. Ideally, the ear mold should fit hermetically in the ear to avoid the feedback, but at times it sticks out anyway, and when it does, it’s unbearable.
- Closed-source software and hardware. This industry is controlled by a bunch of electronics giants (Siemens, Phonak, etc.), and they have become monopolistic to some extent in this market, since only they had the initial resources to support the research and production of hearing aids. Of course, they are laying out all the rules now, which leads us to the fourth and worst limitation of all.
- Price. These devices are pricey as hell. It’s a mic, a little processor and a speaker. Yeah, the size is super-small, but it doesn’t add up to $1500-$2000 in my head, sorry. It’s just immensely overpriced. I’m not a cheap guy, but I do have a problem when people feed me “magic”, and as I work in technology, I know that such claims are almost 100% marketing and outright bullshit. They know that most of us, the hearing impaired, don’t have a choice, and we’re forced to pay twice as much for anything they come up with. Look at the last ten years — they are going round in circles. Hearing aids haven’t seen any revolutionary improvement in decades, compared with the booming consumer electronics market, and it’s just sad.
Considering all of the above, there is great demand for an open solution on the market: something you could tinker with yourself and use as a temporary or permanent substitute for a commercial hearing aid. To achieve that, it should be capable not only of recording, amplifying and reproducing sound, but it should also be smart enough to amplify only certain frequency ranges, depending on the type and severity of the hearing damage. It needs some computational power to process the sound. Ideally, it should also analyze the sound and remove background noise, while normalizing the rest (making it quieter or louder depending on the context). Modern smartphones are perfect candidates, since they have everything we need in a hearing aid. I started looking for solutions available as an iPhone app and stumbled upon BioAid.
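To illustrate the frequency-selective part, here is a tiny toy sketch of my own (nothing to do with BioAid’s actual algorithm): split the signal with a one-pole low-pass filter and boost only the high-frequency remainder, the way you would compensate for high-frequency loss like mine.

```javascript
// Toy frequency-selective amplifier. "gain" is the approximate boost
// applied to the high band; "a" is the low-pass smoothing coefficient
// that sets the crossover point between the bands.
function highShelf(samples, gain = 4.0, a = 0.3) {
  const out = [];
  let lp = 0.0;
  for (const x of samples) {
    lp += a * (x - lp);              // running low-pass estimate (low band)
    const hp = x - lp;               // what's left is the high band
    out.push(x + (gain - 1.0) * hp); // boost only the high band
  }
  return out;
}
```

Feed it a low tone and it comes out almost untouched; feed it a tone well above the crossover and it comes out a few times louder. A real aid shapes many calibrated bands, adds compression and feedback suppression and so on, but the sketch shows why a phone’s CPU is more than enough for the basic job.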
BioAid is an app implementing a full-featured hearing compensation algorithm, developed by a team of scientists at the University of Essex. They themselves stress that this is not really about an iOS app, but about the algorithm at its heart, which took years of research and continues to evolve today.
Initially, the research was not concerned with hearing aids at all but with the construction of computer models of how hearing works at a physiological level in the auditory periphery.
However, the team has since moved to working on hardware models and opted for mobile phones, since commercial hearing aids are nearly impossible or too expensive to modify and require an agreement with the manufacturers, which is not easy to obtain. A smartphone has everything a hearing aid needs (mic → processor → speaker), it's compact enough, and modern models have sufficient battery life to perform on-the-fly sound processing for most of a day. In my case it was a godsend, and I rushed to test the app in everyday situations.
The first thing I needed to do was find the most suitable mode. For me it was simple, since I've done a million audiograms and knew that my hearing lacks some of the higher frequencies. After a quick scan I found Gradual HF, which I recognized at once, since it reminded me of how all my aids had sounded. My advice would be to start your scan with the first variant of every mode, since some of the modes may be too loud or too high in frequency, and it's unpleasant to learn that the hard way. Surprisingly, finding the right mode is not a problem at all. I was afraid the app would require audiograms, which would complicate things; it's definitely easier this way. Depending on the headphones (they have different levels and may alter the sound a little), I was best off with the 2nd and 3rd variants of the Gradual HF mode.
I started testing the app in a park with lots of people walking, rolling, and skating around. Although it was quite a test to start with, I was pretty impressed with the results. It reminded me of the day I put my first aid on. I heard everything happening around me quite distinctly, and the frequencies were altered in the right way. Sure, the iPhone headset mic has its problems, and I'm still hoping to find a better one, but other than that, I had no issues at all. The app does reduce a little background noise, depending on the Gate value; however, I wouldn't recommend setting it much higher than the default, as it may cripple other, more critical sounds. The problem with the standard Apple headset mic boils down to occasionally missing sounds coming from behind or from the left (if you wear the mic on your right side), but it's not critical. However, if you're speaking with someone standing on your left, it may work a little less precisely than usual. The mic is also quite sensitive to wind and clothing rustle. Due to audio lag you can't use Bluetooth headphones, though. This is an iPhone issue: people watching videos with Bluetooth headphones sometimes notice it too.
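For the curious, a noise gate in its simplest form just mutes everything below a loudness threshold, which is also why an overly aggressive Gate setting can swallow quiet but important sounds along with the hiss. A toy sketch, not BioAid's actual implementation (the function and the threshold value are made up for illustration; real gates also smooth over short windows instead of acting per sample):

```python
def noise_gate(samples, threshold):
    """Silence any sample whose absolute level falls below the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

quiet_hiss = [0.02, -0.03, 0.01]   # low-level background noise
speech = [0.4, -0.5, 0.35]         # louder, meaningful signal
gated = noise_gate(quiet_hiss + speech, threshold=0.1)
# the hiss samples are silenced, the speech samples pass through unchanged
```

Raise the threshold too far and quiet speech starts falling below it too, which matches the behavior described above.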
Usually, I wear my hearing aid at the office, since it is the only place where most conversations are critical and may start almost spontaneously. Here, too, I was quite satisfied: I heard everything said in meetings, even better than with my previous aid, and decided to use BioAid at work, at least temporarily. The only problem I can imagine is people's perception of someone wearing headphones all the time; some will assume you're listening to music. My office is quite liberal and modern, it being an IT company, so a little Skype chat announcement did the trick, though I can imagine it wouldn't always work this well. Personally, I find headphones more aesthetically tolerable than a BTE aid, since people wear headphones almost constantly nowadays. Another problem is that you may need to buy a battery extender case or stretch your iPhone's battery life in some other way. My battery barely lasts through a usual working day, and only if I avoid other uses of the phone where possible: I listen just to some occasional music on my commute in the morning and after work. If I take the phone off the charger at 8 in the morning, it is usually almost dead by 22:00 after a full 8-hour day. Battery life is my biggest concern with the app so far. I have thought of getting a separate iPod Touch and running the app there, as Professor Ray Meddis does in this video.
Another minor flaw is that the algorithm is implemented in mono, though in theory a stereo implementation is also possible. This matters because it may affect your perception of the direction a sound is coming from. Then again, even if the signal were processed in stereo, the standard iPhone headset mic is mono, so the sound is mono by default. Stereo would probably also drain the battery faster, so maybe it's fine the way it is. This is specific to the iPhone implementation, not the algorithm itself. Speaking of the iPhone implementation, there are also some minor issues that complicate the workflow, like the app stopping on a phone call and not resuming afterwards, or the welcome screen appearing every time the app is launched, but all of these are solvable.
Still, for now I'm not even thinking of going back to commercial aids. I have a very strong impression that the BioAid approach is the future of hearing aids, especially for people whose hearing damage is not so severe as to require deep in-canal aids or even implants, which is the majority of people with hearing problems. Offloading the sound processing from the aid to a smartphone or similar device (an iPod Touch?) may be the right way forward, especially considering that going from nine to twelve channels adds at least a thousand dollars to the price of a commercial aid, while the iPhone has enough computational power to process much more. Sure, there are still some problems, but most of them are implementation issues and are going to be fixed sooner or later. The algorithm itself is entirely open-source, which means you can fork it on GitHub and create your own version, addressing the issues described above or adding support for some other platform. If you're a hearing-impaired person and you've decided to try BioAid for yourself, don't forget to send your feedback to the research group, since it may turn out to be very useful to them.