Overriding Default Werkzeug Exceptions in Flask

Let’s play a game here. What HTTP status code does this exception correspond to:

{
"message": "The browser (or proxy) sent a request that this server could not understand."
}

No, no, don’t look at the code in the response! That’s cheating! This is actually the default Werkzeug description for the 400 code. No shit. I thought something was wrong with my headers or encryption, but I would never have guessed a simple Bad Request from this message. You could use a custom exception, of course; the problem is that the very useful abort(400) object (it’s an Aborter in disguise) would stick with the default exception anyway.

Let’s fix it, shall we?

There may be several possible ways of fixing that, but what I’m gonna do is just update abort.mapping. Create a separate module for your custom HTTP exceptions, custom_http_exceptions.py, and put a couple of overridden exceptions there (don’t forget to import abort; we’ll be needing it in a moment):

from flask import abort
from werkzeug.exceptions import HTTPException


class BadRequest(HTTPException):
    code = 400
    description = 'Bad request.'


class NotFound(HTTPException):
    code = 404
    description = 'Resource not found.'

These are perfectly functional, but we still need to add them to the default abort mapping:

abort.mapping.update({
    400: BadRequest,
    404: NotFound
})

Note that I import the abort object from flask, not flask_restful: only the former is an Aborter object with a mapping and other bells and whistles; the latter is just a function.

Now just import this module with * into your main Flask module (where you declare and run your Flask app), or someplace else where it would have a similar effect at runtime.
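To sanity-check that the override actually took, you can exercise the mapping directly with Werkzeug’s Aborter class (a minimal self-contained sketch; in the app you’d update flask’s abort.mapping as shown above instead of creating a fresh Aborter):

```python
from werkzeug.exceptions import Aborter, HTTPException


class BadRequest(HTTPException):
    code = 400
    description = 'Bad request.'


# a fresh Aborter starts out with the default Werkzeug exceptions;
# updating its mapping is exactly what we did with flask's abort above
abort = Aborter()
abort.mapping.update({400: BadRequest})

try:
    abort(400)
except HTTPException as exc:
    print(exc.code, exc.description)  # 400 Bad request.
```

If the mapping were not updated, the description printed would be the long default “The browser (or proxy) sent a request…” message from the beginning of this post.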

Note that you should also have the following line in your config because of this issue:

ERROR_404_HELP = False

I’m not sure why this awkward and undocumented constant isn’t False by default. I opened an issue on GitHub, but no one seems to care.

Remote Debugging with PyCharm

I’m working on a project now that requires a certain (server) environment to run, so it is developed on my local machine and then gets deployed to a remote server. I thought I was going to say bye-bye to my favorite PyCharm feature, namely the debugger, but to my surprise remote debugging has been supported for years now. It took some time to figure out (the tutorials online are a bit ambiguous), so here is a short report on my findings.

For the sake of this tutorial let’s assume the following:

  • Remote host: foo_host.io
  • Remote user: foo_usr (/home/foo_usr/)
  • Local user: bar_usr (/home/bar_usr/)
  • Path to the project on the local machine: /home/bar_usr/proj

Here goes the step-by-step how-to:

  1. First, we need to set up remote deployment, if you haven’t done so already. Go to Tools → Deployment → Configuration and set up access to your remote server via SSH. I’d use:
    • Type: SFTP
    • SFTP host: foo_host.io (don’t forget to test the connection before applying)
    • Port: 22 (obviously)
    • Root path: /home/foo_usr
    • User name: foo_usr
    • Auth type: Key pair (OpenSSH or PuTTY)
    • Private key file: /home/bar_usr/.ssh/id_rsa (you’d need to generate the key and ssh-copy-id it to the remote machine, which is outside the scope of this tutorial).
  2. Go to the Mappings tab and add a deployment path on the server (perhaps the name of your project).
  3. Now under Tools → Deployment you have an option to deploy your code to the remote server. These first three steps could be replaced with a simple Git repository on the server side; however, I sometimes prefer this way.
  4. Now that you have the deployment set up, you can go to Tools → Deployment → Upload to .... Note, however, that it deploys only the file you have opened or the directory you selected in the project view, so if you need to sync the whole project, just select your project root.
  5. I use virtualenv, so at this step I need to ssh into the remote machine and set up a virtualenv in the project directory (/home/foo_usr/proj/.env), which is outside the scope of this tutorial. If you’re planning on using the global Python interpreter, just skip this step.
  6. Now let’s go to File → Settings → Project ... → Project Interpreter. Using the gear button, select Add Remote. The following dialog window will let you set up a remote interpreter over SSH (including a remote .env), via Vagrant, or using the deployment configuration you set up previously. For the sake of this tutorial I’m going to put something like this there (using SSH, of course):
    • Host: foo_host.io
    • Port: 22 (which is there by default)
    • User name: foo_usr
    • Auth type: Key pair (OpenSSH or PuTTY)
    • Private key file: /home/bar_usr/.ssh/id_rsa
    • Python interpreter path: /home/foo_usr/proj/.env/bin/python
  7. If you set everything up correctly, it should list all the packages installed in your remote environment (if any) and select this interpreter for your project.
  8. Now let’s do the last but most important step: configure debugging. Go to the Edit Configurations… menu and set things up accordingly. For our hypothetical project I will use the following:
    • Script: proj/run.py (or something along these lines)
    • Python interpreter: just select the remote interpreter you have set up earlier.
    • Working directory: /home/bar_usr/proj/ (note that this is working directory on local machine)
    • Path mapping: create a mapping along the lines of /home/bar_usr/proj = /home/foo_usr/proj (although this seems pretty easy, it may get tricky when you forget about the mappings and move projects around, so be careful).
That’s it. Now we should have a more or less working configuration that you can use both for debugging and running your project. Don’t forget to update/redeploy your project before running, as the versions may get out of sync and PyCharm will get all whiny about missing files.

Humble Collection of Python Sphinx Gotchas: Part II

Gotcha 1: Release and Version

Sphinx makes a distinction between the release and the version of the application. The idea is that it should look this way:

version = "4.0.4"
release = "4.0.4.rc.1"

Most projects use a much simpler versioning convention, so they would probably do something like this:

version = "4.0.4"
release = "4.0.4"

I’d been doing this myself for some time, until I realized that conf.py is a simple Python file (no shit!) and it is perfectly fine to do something like this:

version = "4.0.4"
release = version

Yeah, kinda obvious, I know. I missed it, however (even though I’ve been putting much more complex code into conf.py), and some other people did too. So, I’ll just leave it here.
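And for projects that do use pre-release tags, the same trick works in the other direction: keep only release by hand and derive the short version from it (a sketch, assuming the dotted scheme from the first example):

```python
# conf.py (sketch): maintain `release` in one place and derive `version`
release = "4.0.4.rc.1"
# the short version is just the first three dot-separated components
version = ".".join(release.split(".")[:3])
print(version)  # 4.0.4
```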

Gotcha 2: Download as PDF

Sometimes a PDF download should be provided along with the hosted HTML version. It looks good, and people get a well-formatted file to use locally while they are trying to work with your API or product. In short: it can be done easily, using Python or Bash. I’m really pissed off at people who know Python or Bash and still keep asking whether there is an automatic way of doing that. Well, it doesn’t get more automatic than this:

# generating latex and pdf
make latexpdf

# generating html
find 'index.rst' -print -exec sed -i.bak "s/.. &//g" {} \;
find 'index.rst' -print -exec sed -i.bak "s/TARGET/$UPTARGET/g" {} \;

VERSION="$(grep -F -m 1 'version = ' conf.py)";
VERSION="${VERSION#*\'}";
VERSION="${VERSION%\'*}"

sphinx-build -b html . ../$TARGET/$VERSION/
cp latex/$UPTARGET.pdf ../$TARGET/$VERSION/

Note that the following line should be inserted into index.rst for the example above to work:

.. &  `Download as PDF <TARGET.pdf>`_

Here .. is the comment syntax, & is there to distinguish the line from usual comments, and TARGET is replaced with $UPTARGET, which is the uppercase version of the project name and the default name of the .tex and .pdf files. This creates a relative link to the .pdf file, which is then copied to the exact same folder where the HTML output is located. I’m not going to explain much about the variables, as their sources may differ. At work I use a Python script based on the exact same principle (I figured a Bash example would be more universal), and it gets the values of $TARGET, $UPTARGET and $VERSION from a JSON file with a list of targets (more on that in the next example). In the example above, I’m stripping the values out of the conf.py file. In fact, you can use whatever input you wish, or even pass the values as arguments. What I was trying to illustrate is the concept itself.
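For illustration, here is the same version extraction done in Python instead of grep/sed (a sketch; the conf.py content is inlined as a string so the example stays self-contained):

```python
import re

# a fragment of a typical conf.py, inlined for the example
conf_py = "project = 'Acme'\nversion = '4.0.4'\nrelease = version\n"

# pull the value out of the `version = '...'` line
match = re.search(r"^version\s*=\s*'([^']+)'", conf_py, re.MULTILINE)
version = match.group(1)
print(version)  # 4.0.4
```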

Gotcha 3: Using Scripting to Organize the Sphinx Project as a Multiple Project Knowledge Base

Some of the companies I’ve worked at had a huge array of active projects that they wanted to present as a single site, or as a whole variety of sites with the same theme, or as a single site with a PDF version for every first-level subsection. Basically, they wanted me to create a Sphinx-based knowledge base. Using a simple Python or Bash script, you can organize your project any way you want (we’ll use Python this time, as it’s closer to what I’ve been using). We’re going to create a site that automatically builds a PDF version for each first-level subsection (project) and puts it alongside the subsection’s index.html. Basically, this is a slightly more complex variant of the previous example.

Let’s imagine we have a single Sphinx project with a couple of first-level sections corresponding to the company’s projects, for example Foo and Bar (give me that medal for originality, yeah). Your folder structure will look like this:

Acme
|  index.rst
|_ Foo
|  |_1.0.0
|    |_index.rst
|  |_1.1.0
|    |_index.rst
|
|_ Bar
   |_1.0.1
     |_index.rst

Yeah, we also have versions. I use the following script for projects with such a layout. Don’t worry, it only looks kinda big; the script is rather simple. Also, I’ve commented the hell out of it so that you can figure it all out. Note that you also need to create a targets.json file in the root of your project, containing the following lines (assuming we’re using the structure we agreed on in the beginning):

{
  "foo" : "Foo Foo",
  "bar" : "Barrington"
}

The file tells the script the full project names and how they correspond to the target names (folder names) in the structure. You will also need a temp.py file containing only the info we need for PDF building, with most of the target names and version numbers represented as variables for injection (yeah, I know this is hacky, but I didn’t want to bother with imports, dependencies, etc.). First of all, it should have &VRSN tags:

# The short X.Y version.
version = '&VRSN'
# The full version, including alpha/beta/rc tags.
release = '&VRSN'

It should also have tags in the LaTeX part of the settings:

latex_documents = [
  ('index', '&TRGT.tex', u'&UPTRGT',
   u'ACME', 'manual'),
]

Other than that, temp.py may resemble your usual conf.py. The reason is that we use conf.py for HTML, and it has preset version and project name values for the project as a whole. So we’d better distinguish between the file for injections and the main configuration file, so that they don’t mess with each other. Note that if you need to add some additional parameters or a preamble to the LaTeX output, you should do that in temp.py, as conf.py is not used for building the PDFs at all.
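The core of such a script boils down to reading targets.json and substituting the &-tags in temp.py before each PDF build. A minimal sketch of that injection step (the template string stands in for the real temp.py file, and the JSON is inlined to keep the example self-contained):

```python
import json

# full project names per target, as in targets.json above
targets = json.loads('{"foo": "Foo Foo", "bar": "Barrington"}')


def inject(template, target, version):
    """Replace the &-tags in a temp.py template with concrete values."""
    return (template.replace("&VRSN", version)
                    .replace("&TRGT", target)
                    .replace("&UPTRGT", targets[target]))


template = ("version = '&VRSN'\n"
            "latex_documents = [('index', '&TRGT.tex', u'&UPTRGT', u'ACME', 'manual')]\n")
conf = inject(template, "foo", "1.0.0")
print(conf)
```

The real script would loop over every target/version folder in the tree above, write the result out as the conf.py for that PDF build, and call the Sphinx LaTeX builder.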

If we prepare the project this way, the script should build PDFs for every subproject and put them into the subproject’s HTML root. Ideally, the HTML version could also be built separately for every subproject (so that the right project name/version appears for every subproject). This script is more of a proof of concept than an out-of-the-box solution. However, now that you understand Sphinx’s capacity for extension and automation, you can create projects of any complexity yourself.

Humble Collection of i3wm Lifehacks: Part I

Lifehack 1: Locking the Screen

This is kind of a no-brainer, but even I sometimes look for a place to copy some of the syntax from (I’m lazy and don’t always keep it in my head), so let’s start with this one. i3wm ships with the beautiful and robust screen locker i3lock, which can be launched like this:

i3lock -c 000000

It will lock the screen with a black overlay. The problem is that you wouldn’t want to type this command every time you lock the screen. We need to add a shortcut to the i3 config file:

bindsym $mod+Shift+Tab exec "i3lock -c 000000"

Now when we press the combination of the mod button (Win in my case) and Shift+Tab, our screen gets locked.

Lifehack 2: Activating/Deactivating the Second Screen

If you use i3wm on a daily basis, you probably know that a second screen is not turned on automatically. You have to manage displays manually with the xrandr command. If we run this command without arguments, we’re gonna get something like this:

Screen 0: minimum 320 x 200, current 3200 x 1080, maximum 8192 x 8192
LVDS1 connected 1280x800+1920+0 (normal left inverted right x axis y axis) 261mm x 163mm
   1280x800      60.02*+  50.05  
   1024x768      60.00  
   800x600       60.32    56.25  
   640x480       59.94  
VGA1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 521mm x 293mm
   1920x1080     60.00*+
   1680x1050     59.95  
   1280x1024     75.02    60.02  
   1440x900      59.89  
   1280x960      60.00  
   1280x720      59.97  
   1024x768      75.08    70.07    60.00  
   832x624       74.55  
   800x600       72.19    75.00    60.32    56.25  
   640x480       75.00    72.81    66.67    60.00  
   720x400       70.08  
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)

On some rare occasions, typing something like this manually would not be a problem:

xrandr --output VGA1 --right-of LVDS1 --auto

Well, I know you’re probably well aware of how xrandr works. Just in case.

It may get pretty tedious, though, if you use a second screen on a daily basis and regularly unplug it from your laptop. The best way to go is to add a script shortcut to your /usr/bin/ or /bin/ directory. Run the following lines:

printf '#!/bin/bash\n\nxrandr --output VGA1 --right-of LVDS1 --auto' > /usr/bin/screenswitch
chmod 755 /usr/bin/screenswitch

Now we can use the screenswitch command, which by itself doesn’t make things that much easier. What would certainly help is a key shortcut, so let’s add a line similar to the one from the previous hack to our i3 configuration:

bindsym XF86Display exec "screenswitch"

Now when you press your special key combo (Fn+F7 on my ThinkPad), you enable/disable the second screen. Try some other key combination if you have no special display button. Of course, the script itself is pretty basic, and it works only if your screen behaves well in auto mode and you use the same second screen daily. However, there are more complex scripts available all over the web (example).
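Since this blog leans on Python anyway, here is a sketch of what a slightly smarter screenswitch could look like: it queries xrandr and turns the external output on or off depending on whether it is plugged in. The output names (VGA1/LVDS1) are the ones from the listing above; adjust them to your hardware.

```python
#!/usr/bin/env python3
import subprocess


def build_command(xrandr_output, external="VGA1", internal="LVDS1"):
    """Pick the xrandr invocation based on the output of a plain `xrandr` query."""
    if f"{external} connected" in xrandr_output:
        # the external screen is plugged in: place it to the right and enable it
        return ["xrandr", "--output", external, "--right-of", internal, "--auto"]
    # not plugged in (or just unplugged): make sure the output is off
    return ["xrandr", "--output", external, "--off"]


if __name__ == "__main__":
    status = subprocess.run(["xrandr"], capture_output=True, text=True).stdout
    subprocess.run(build_command(status))
```

Drop it in place of the one-liner in /usr/bin/screenswitch (and chmod it the same way), and the same bindsym line will toggle the screen both ways.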

Lifehack 3: Locking the Screen on Wake

If you use pm-utils with your i3wm setup, you’ve probably noticed that the screen is not locked when the laptop wakes up after suspend or hibernate. That’s very insecure. Let’s try to fix it. Create the file /etc/pm/sleep.d/91blocker and add the following lines to it:

#!/bin/sh
case "$1" in
        thaw|resume)
                su youruser -c '/usr/bin/i3lock -c 000000'
                ;;
        *) exit $NA
                ;;
esac

Don’t forget to change youruser to your username. Now let’s make sure we have all the right permissions:

chmod 755 /etc/pm/sleep.d/91blocker

Now, if we run pm-suspend or pm-hibernate, our screen is going to be locked on wake. This script has one shortcoming, though: it doesn’t lock the screen instantly, so you may see things on the screen for a couple of seconds before it gets locked. If that’s not a critical issue for you, feel free to use it; otherwise you may need to work on it or find a different solution altogether. If you have any ideas on how to improve it, let me know.

ASUS X102B and second screen on Debian

I’ve stumbled upon a very capricious piece of hardware lately, namely the ASUS X102B. Basic (very basic) video seems to work out of the box, but there are numerous problems here and there, especially with the second screen, which doesn’t seem to work at first glance. If you run xrandr, it will only recognize the “default” output, and even that looks pretty much broken. The first thing to do was:

sudo apt-get install firmware-linux-nonfree

Actually, it’s a go-to solution (as in, the first thing to try) for many Debian hardware-related problems, as the distro doesn’t include non-free firmware by default. After that, it recognizes most outputs the right way, but it may miss the right modes. If so, you can add your mode manually.

First, retrieve the full information about the mode:

cvt 1920 1080

You will get something like this:

# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

Now use the info to add the mode to xrandr and then assign it to the output:

xrandr --newmode "1920x1080" 173.00  1920 2048 2248 2576  1080 1083 1088 1120 -HSync +VSync
xrandr --addmode VGA-0 "1920x1080"

Use the xrandr command to check the list of available outputs and modes. You probably know that already, but here is how to use xrandr with the newly created mode:

xrandr --output VGA-0 --mode "1920x1080"

Well, as this one issue is officially resolved, I’m off to fight the rest of the couple hundred problems that arise when trying to use this laptop with Debian. Wish me luck, and leave a comment if you’ve had some trouble with the machine (when used with Debian, that is); we could try to work it out together.