The author of the article above uses AST rewriting, which I have to admit is very clever, though hard to introspect and extend unless you are already familiar with AST manipulation in Python.
My approach is slightly different: it yields a less pretty syntax but, being a plain function, is arguably more flexible and extensible. The definition currently looks like this:
def pype(x, *fs):
    """
    Pipe function. Takes an initial value and any number of functions/methods.
    Methods are given as strings. Additional args are supported for functions
    and methods by supplying a step as a tuple/list with the function/method
    as the first element and the args as the rest. The pipe input is used as
    the last argument in this case. Currently no kwargs.
    """
    while fs:
        f = fs[0]
        args = []
        if isinstance(f, (list, tuple)):
            args = list(f[1:])
            f = f[0]
        if isinstance(f, str):
            if f.startswith('.'):
                x = getattr(x, f[1:])(*args)
            else:
                x = x[f]
        elif isinstance(f, int):
            x = x[f]
        else:
            x = f(*(args + [x]))
        fs = fs[1:]
    return x
The docstring is a bit abstract, so I think an example is much more explanatory:
from pype import pype

def add_suffix(number, s):
    return '{} is {} cool!'.format(
        s,
        ' '.join('very' for _ in range(number))
    )

pype(
    ' abc: {} ',
    '.strip',
    ('.format', 3),
    (add_suffix, 2),
    '.upper',
)
# 'ABC: 3 IS VERY VERY COOL!'
I am aware this is a rather contrived example, but you get the idea. It currently does not handle keyword arguments, and you cannot specify where the piped value should go; it always takes the last slot, which is not always convenient, but this way you can avoid littering your code with lambdas, which are quite verbose in Python (I quite enjoy Elixir's solution here: &func(1, &1, 3), with &1 being the placeholder).
I might extend this further in the future and maybe introduce it into some code bases of mine, if it turns out to be useful.
The basic setup is quite simple: I am using Stack to manage build dependencies and sandboxing for Haskell (sandboxing by default is actually one of my basic requirements for new languages that I pick up; it is 2018, you can ship your language with a package manager that sandboxes by default, and one of my biggest problems with Python is virtualenv). For reasons that become clear later in this post, I also need a LaTeX installation, which currently is not managed in any way, but I do not require anything out of the ordinary, so usually it is just a matter of installing the distribution for my operating system.
To achieve a nice, human-readable URL scheme I not only generate a slug from the original file name, which usually matches the title, but also get rid of the ugly .html suffix: every page and post is rendered to an index.html file inside a directory with the corresponding name, resulting in URLs with trailing slashes. Credit for this idea goes to Rohan Jain.
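The routing part of this trick is small enough to sketch here. This is not the exact code from this blog, just a minimal version of the idea with names of my own choosing:

import Hakyll
import System.FilePath (takeBaseName, takeDirectory, (</>))

-- Route e.g. "posts/my-fancy-title.md" to "posts/my-fancy-title/index.html",
-- so the final URL is /posts/my-fancy-title/ with a trailing slash.
cleanRoute :: Routes
cleanRoute = customRoute $ \ident ->
    let p = toFilePath ident
    in  takeDirectory p </> takeBaseName p </> "index.html"

In the rules for pages and posts, a route like this takes the place of the usual route (setExtension "html").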
Of course this blog also has an Atom feed (Atom vs. RSS has been debated for a while, but in the end my use case is super simple anyway, so I am just using Atom until I find an actually valid reason to get into comparing the two formats), so you can follow my posts in your favourite newsreader, or use Firefox live bookmarks, for example. I ran into one particular problem with this though, as one of my recent posts included an ampersand (&) in the title. The rendered feed file (no matter the format) would be invalid because of this, so I had to implement the escaping for titles myself (this is already done for the body by Hakyll). Thanks to the way Hakyll embraces the Haskell philosophy, this was just a matter of mapping the encoding function over the post titles for the feed output.
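As a rough sketch of what that can look like (this is not the blog's actual feed context, and the field layout is an assumption, but Hakyll's escapeHtml and mapContext are the relevant building blocks):

import Hakyll

-- Escape '&', '<' and '>' in the title (and other metadata fields) before
-- they reach the Atom template; the body is already handled by Hakyll.
feedContext :: Context String
feedContext =
    mapContext escapeHtml metadataField <>
    bodyField "description" <>
    defaultContext

Such a context would then be passed to renderAtom together with the feed configuration and the list of posts.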
A long while ago I had the idea of making my CV available online in the browser, like many front end developers do to showcase their skills. At the same time I still need a PDF version that can be printed neatly. Being a developer, I of course cannot stand the idea of maintaining two separate CVs, so I thought: why not generate both versions from the same source (of truth), using a single build process? That is what I am currently finalising.
The HTML version for the website is just a static page in the blog, simple enough. Hakyll gives me very fine-grained control over the actual build process, so I can leverage custom Markdown tags to control layout if I need to. The PDF version of my CV has always been generated using LaTeX, because it produces beautifully rendered output in a reproducible fashion. Since I am already using Pandoc to generate HTML from the Markdown source, I also use it to generate LaTeX code from the same source and pass that into a LaTeX template. Then I just run xetex in a subprocess to render the final PDF.
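In rough outline, the PDF side of the build boils down to something like the following sketch (the file and template names are invented for illustration, and the real build may well drive Pandoc as a library rather than via its CLI):

import System.Process (callProcess)

-- Render the Markdown CV to LaTeX with Pandoc using a custom template,
-- then let XeTeX produce the final PDF.
buildPdfCv :: IO ()
buildPdfCv = do
    callProcess "pandoc" ["cv.md", "--template=cv-template.tex", "-o", "cv.tex"]
    callProcess "xelatex" ["cv.tex"]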
This blog is currently hosted in two locations: GitHub Pages, which I have been using for many years, and GitLab Pages, which I only added recently. While the build and deployment process for these two platforms is slightly different, they mostly work off the same codebase, with the only difference being the makefile for GitHub being replaced by the GitLab-specific build file. The GitHub version I generate locally with my locally compiled Hakyll, and then push to the right branch using the makefile. This makefile also allows me to run a local server to preview the rendered output before committing. The GitLab repository is set up to mirror the one on GitHub and rebuild via GitLab CI on every change, so it compiles the Hakyll application in a Docker container and then generates the output.
These two build processes have different pros and cons. The GitHub version is available slightly faster, as my local render only takes a couple of seconds and after pushing I just have to wait for GitHub's cache to refresh, which usually takes only a couple of minutes, while the GitLab version has to run the CI job, which also takes a couple of minutes. On the upside, the GitLab version does not require me to have a locally installed version of Haskell, Stack or anything else; as long as I can push to the repository, it works, allowing me to explore workflows which happen end-to-end on iOS. I have been investigating this exact workflow, using a combination of iA Writer, Workflow and Working Copy to write, transform and push the posts, leaving the build process to GitLab CI.
If you are interested in details, have a look at the source on either Github or GitLab.
The easy answer is that it should not really matter. Especially if you are working in a somewhat specialised environment or are under a lot of constraints, like low-power hardware or strict performance requirements, you will probably end up using a very domain-specific set of tools. A language like Kotlin is the recommended way forward if you are developing an Android application, but looking at the TIOBE index (which I believe to be a bit biased, not that I am perfectly objective myself; the overall “winner” of 2013 was Transact-SQL, a language many programmers will not even have heard of), it still only comes in at position 49 at the time of writing, so obviously popularity cannot matter more than platform constraints.
Another quite important factor is the repertoire of tools already at hand. Many tools, especially languages and large frameworks like Vue.js and React, require a rather large upfront learning effort before they can be used efficiently. If you already know React inside out, why would you learn Vue, assuming it fundamentally does not enable you to do anything React does not? I am currently paying for my food by writing code in a primarily Python/Django environment, and sometimes I look over at the neat lawn the Ruby on Rails crowd seems to have, but it would not be wise of me to invest a sizeable amount of time to switch ecosystems without actually gaining much (apart from that, Python is also larger than Ruby and growing in comparison; Ruby is probably not dying anytime soon, but it has definitely been declining since the days when everything was Rails, much to my dismay).
I can already hear you typing your furious tweets at me, claiming that popularity of course makes a difference. And you are not wrong; there are very valid arguments to be made.
Popularity does indeed matter when you choose a technology to use, for example if you need to add support for a new data format and shop around for libraries. Especially in the open-source world, a larger user base means more eyes scanning the code for bugs, more potential contributors keeping the software updated and supplying it with new features, and better chances it will stay maintained overall. If you use a common set of software, chances are the pieces can interact easily in some way, saving you integration time. All of these are important factors which should be considered when making a choice of tooling.
In the end there is no single deciding factor to look at, like for example popularity, but instead a lot of different ones. It is crucial to recognise all of them and think about how to weight them in importance. Newer (and shiny) tools might offer significant gains, but come at the price of a smaller community, so less support and a higher risk of abandonment.
My technical requirements are not outlandish: I need access to the system calendar and location services, the ability to trigger notifications, and that is about it. Performance should of course be great, but considering that we are talking about an app that essentially boils down to a lot of tables, and that I know my way around handling large amounts of data, I am not worried about that.
Originally I started developing this app in Swift, using Xcode for development. Apart from having to fight with a somewhat buggy vim plugin for Xcode, package management for mobile Swift projects is a big mess, though I managed to load in the libraries I needed in the end. Then there is Swift itself, which, especially in the way Apple wants you to write apps, using the visual interface designer and control-dragging buttons into source files, is not ideal. It feels unnecessarily low-level in a lot of places, and the default project structure promotes a rather imperative style, without any real solution for managing state in a sensible way.
So after having built a first prototype the classic way in Swift, I decided to have a look around for alternatives. I did not want to use Cordova, which is essentially the Electron of mobile platforms, running your JS code inside a fullscreen browser. While I am not worried about raw performance, I could see this additional runtime layer making the whole UI less snappy. So I had a look at React Native, a framework based on React that drives native platform UI from JavaScript. React is currently used at my workplace (along with some legacy Angular code), so I have access to some experienced React developers to help me at least with conceptual understanding, which was quite helpful.
While I have been interested in getting a bit more frontend experience, the prospect of having to write a lot of JavaScript still was not overly compelling, so I had another quick look around to see whether some of the languages that compile to JavaScript allow for easy integration with React Native. While there are some Elm wrappers, the ClojureScript options are much more mature: Re-Natal wraps React Native, and re-frame and Reagent wrap React itself. I had also wanted to take a more serious look at Clojure for a while, as I have heard that it is an exceptionally well designed language.
So this is the stack I am now using, which so far has turned out to be a good decision overall. I do feel reasonably comfortable in Clojure(Script) at this point, though the whole compilation process is still a bit arcane to me, as it compiles via JavaScript down to native code, and there are a lot of moving parts I do not know much about.
The only issue I have encountered so far was that running on a real device did not work in React Native 0.54. That has been fixed in 0.55, which in turn deprecated some interfaces my navigation library was using, causing lots of nice warnings to show up in my build. I am currently in contact with the maintainer of the ClojureScript library that wraps the JavaScript library which broke backwards compatibility, and I am working on fixing these issues myself: a great challenge considering I have about a month of ClojureScript experience at this point, but also a lot of fun.
My goal for this is to eventually end up on the App Store, use it myself, and maybe even make some money out of this. I do not expect this to ever become remotely profitable, but it is going to be a nice bullet point on my CV. So far I have a small fraction of the features I have planned, but I believe I have worked out most of the difficult parts to the point where I can quite efficiently extend the functionality on top of what I have built so far.
The bottom line of this post is that I like this stack a lot. It has not been smooth sailing all the way, and some of the tooling I use is still in quite early development, but I have yet to find anything that completely blocks me for more than an hour.
The reason I feel more productive when working from home, and thus better in general, is my ability to easily get into the Zone™, the state of mind where focus on the task at hand trumps everything else. My objectively measured output in the form of commits can easily be gauged by having a look at the GitHub activity histogram of my account, which shows that over the last month, the days I worked from home saw almost a doubling of activity. This seems like a pretty obvious gain no one should be able to deny, but I am going to argue at this point that there is more to my job than just pumping out code.
Sure, one part of my job is producing features and fixing bugs, but software engineering has a huge social component many tend to forget about. During the days I am in the office, I talk to developers from other teams, catch up on the latest developments, discuss design decisions for upcoming features and projects, help out in situations that are not strictly my responsibility, and generally trade in some of my personal productivity to raise company productivity. I think it is important to make this distinction, because on some days I personally do not write a single line of code, but I enable others to save hours of their time, which arguably is the greater gain for the company. Building a Fortress of Solitude can be tempting to boost personal output, but it is not what makes a great engineer.
Even after coming to the conclusion that the gain in personal productivity might not be worth it when looking at the big picture, I still maintain that I want to work from home some of the time. Why is that? The gain here is personal happiness. My happiness is, at least to some degree, bound to my (perceived) personal productivity. When I track time in the evening, write my work journal (something I have been experimenting with; more on that in a future post, but it is basically just writing down what I did by the end of the day) or talk about yesterday in the daily standup, I feel pressured to have produced something more or less tangible. But on some days, I do not really have anything I can say I produced. I do not think anyone actually minds, because I can just say “I spent eight hours yesterday helping someone unbreak the builds”, but somewhere deep down it still feels wrong.
Another thought I had was that I am not nearly as closely monitored when working from home as on the days I come in. This allows me to just wander off to the kitchen, the toilet, or the supermarket if I feel like I need a couple of minutes to think something over. This is considerably harder to do in the office, even though our open office is not quite an Orwellian surveillance state. Instead of taking these small breaks, or sometimes even power naps, I just try to soldier through while I am at the office, because that is what I am there for.
So what should you be doing? First of all, every situation is unique, and I cannot give any blanket advice. If you get the chance, I think you should definitely try working from home at some point. It does require some discipline, more if your home is actually busy with family or flat-mates. But do not take this opportunity for granted, and if your employer says that this is not working out from their perspective, you have no real alternative to accepting that fact.
Even when working from home, I make an effort to stay connected, which today means having Slack on all my devices (in general a terrible idea, don’t do this; I will write about information diets at some point) and being transparent about what I am doing. This is an important step in building trust, so your employer does not feel like you are just abusing this privilege to catch up on Westworld. I also only work from home when I know I am not required in any meetings, and I make a point of talking to everyone I need to talk to the day before, so I can tie up all loose ends.
If you have been here before, you might have noticed that things have changed a fair bit. This is the new blog, reborn from the ashes, so to say, though only because I felt like it was necessary to burn the old one. It is now built using Hakyll instead of Pelican (why switch, you ask? Shiny toys, that’s why), and it uses Tufte CSS instead of my homegrown Jinja2 templates and CSS. I spent much more time on setting this up than I care to admit. The upshot is that I have new CSS without having to rewrite all my custom templates from scratch, and I get to use these neat stylesheets.
One of my major pains with my old setup was the effort required to just write something quickly, due to the fact that I was maintaining my own styles and sources in different repositories, and also because Python requirements are notoriously janky if you are not careful. This new system is handled entirely by stack, which makes things much easier (by the way, you can get the source of this blog over here, if you want to steal some code, or just look at it). The initial setup takes quite a while, because I have to compile a whole load of dependencies, but it is a fully automatic process.
I have to admit, I was tempted to just switch to Medium during this whole process. But now that I have figured out how to build this whole thing (read: get it to compile), I am quite happy with the results. Because Hakyll, just like Pelican, uses Markdown as the source to generate HTML, I could easily port over my old posts once the dust had settled.
Following the blog post linked above, I decided to write my own fuzzy finder in Haskell, because that is the language I am currently learning, and I think it has great potential. It is also a more interesting exercise than Project Euler.
Just in case you have not read the original post and do not know what a fuzzy finder is: it is a mechanism to filter and sort a list of strings by searching for substrings. It is often used in text editors like vim or Sublime Text, where you can just type in “accmanba” and they will open up account_management_backend.py for you. As you can see, it makes switching between more than two files in a project much easier and faster.
Amjith wrote his finder using regular expressions, which are part of the Python standard library and can easily be compiled to be reasonably fast. Sadly, Haskell has no implementation of regular expressions in its standard library, and I did not want to pull in a third-party one just for this. But as it turns out, we do not even need them: the task is simple enough, and Haskell’s string manipulation capabilities are good enough, that we can implement the search algorithm ourselves and still achieve good performance.
So, let us get on to some actual code. The most interesting part here is the matching algorithm:
partOf :: Char -> [Char] -> Int -> (Bool, Int)
partOf _ [] r = (False, 0)
partOf c (x:xs) r | c == x    = (True, r + 1)
                  | otherwise = partOf c xs $ r + 1

match :: [Char] -> ([Char], [Char]) -> (Bool, [Int])
match i s = match' i (snd s) []
  where
    match' :: [Char] -> [Char] -> [Int] -> (Bool, [Int])
    match' [] _ r = (True, r)
    match' (x:xs) s r | fst check = match' xs (drop used s) $ r ++ [used]
                      | otherwise = (False, r)
      where
        used  = snd check
        check = partOf x s 0
I know this is not really optimised for readability, especially if you do not know Haskell, but stay with me, it is quite simple. Before we compare the input, we map all the possible solutions to lowercase and store them in a tuple like ("String", "string"). This way, we can compare against the lowercase version and return the properly capitalised one later on. All we do then is check, for each possible solution, whether each character of the input string appears in order in the solution. If so, we add it to a list along with some data, specifically the position of the first match and the distance between the first and last matched character in the solution. This is the same thing Amjith did for sorting. All this data gets returned in a big list of tuples with both versions of the solutions and the match data. It is not pretty, but it works.
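The preparation step itself is not shown in this post; a plausible minimal version of the prepInput helper referenced below (the one in the repository may differ slightly) would be:

import Data.Char (toLower)

-- Pair every candidate with its lowercase form: matching happens against
-- the second element, while the first, properly capitalised one is what
-- eventually gets returned to the caller.
prepInput :: [[Char]] -> [([Char], [Char])]
prepInput = map (\s -> (s, map toLower s))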
The one function the module actually exports is fuzzyFinder:
fuzzyFinder :: [Char] -> [[Char]] -> [[Char]]
fuzzyFinder input list = map fst . map snd $ sort combo
  where
    combo   = zip (zip ((map sum . map tail) scores) (map head scores)) matches
    scores  = map snd $ map (match input) matches
    matches = filter (fst . match input) $ prepInput list
All this function does is build the tuple list with the lowercase versions, toss it into the match function and filter out the correct versions from the matches that came back, ordering them by the match data in the same way Amjith did it. There are just a couple of extra lines that I omitted here because they are not important, but you can find the complete source on Github.
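To make the interface concrete, a quick hypothetical GHCi session (the file names are made up) would look roughly like this:

λ fuzzyFinder "acm" ["account_management.py", "readme.md", "main.hs"]
["account_management.py"]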
Now you might say: this cannot be fast, it is iterating through all this stuff, and with big enough input it will take forever to present results. Let me show you this:
import System.Environment (getArgs)

readL :: String -> [String]
readL a = read $ "[" ++ a ++ "]"

main :: IO ()
main = do
    args <- getArgs
    if length args == 2
        then do
            list <- readFile $ args !! 1
            print $ fuzzyFinder (args !! 0) $ readL list
        else putStrLn "Wrong number of args"
This is a small program that takes two arguments, the query string and a file path of a list of possible solutions and performs the actions outlined above. Using a 2.2GHz Core 2 Duo, because I am using my laptop, and a 46K list containing over 5000 words (Thanks, Project Euler), this happens:
λ time ./interactive roro ../hEuler/022.input
["ROBERTO","RODRIGO","ROSARIO","GREGORIO","RIGOBERTO"]
./interactive roro ../hEuler/022.input 0.08s user 0.00s system 96% cpu 0.079 total
The execution time goes up to 0.09 seconds when printing out a really large number of names, but that is caused by the fact that we have to print 100 lines or more to the console, which also takes time. In my opinion, this is more than fast enough for auto-completion, which is the main use for fuzzy finders.
So overall, I am really happy with how this turned out. I was able to write it in one morning despite still learning the language. It is reasonably concise, despite the fact that I did not use regular expressions but searched the strings manually, and it is also quite fast.
One of the most important things in your life is your health. As programmers, and as other professionals who mainly work at a desk, staying healthy can be surprisingly difficult, even though sitting all day does not seem particularly demanding. Some of these risks can be avoided by consciously changing habits, but often the right equipment can also play a big role. You really should not skimp when it comes to maintaining your health. I will discuss some free, and also some non-free, ways that can help you with that.
There are several big health risks that programmers are affected by. The first one is repetitive strain injury, which can happen to anyone and can completely stop you from coding if it gets too bad. The easiest way of preventing RSI is the right setup. You want your desk to be at the right height, so your elbows are at a right angle. Depending on the keyboard, a wrist rest can also be helpful. As a rule of thumb, if your hands or arms start hurting after several minutes of continuous typing, there’s something wrong.
No matter what you do, doing something as unnatural as typing for long periods of time takes a toll on your body, like almost any repetitive activity. So make sure to take regular breaks, and also do some stretches for your hands and arms.
You can also try using alternative keyboard layouts, although this is quite a big step. I switched to Colemak, which is still relatively close to QWERTY, on all my machines early this year, and it has helped me tremendously, but it also took me about 10 weeks of using it full-time to get back to my old typing speed and regain muscle memory for vim. Another popular alternative is Dvorak, which is more widely supported (read: on Windows without extra software), but differs radically from QWERTY.
If you want to go all out, there are also various kinds of ergonomic keyboards, which some people swear by. I have not had any personal experience with these, but I feel like being able to, for example, have your hands wider apart can make a big difference for your wrists, especially if you are typing for 6-8 hours a day.
Another problem many office workers have is bad posture and the resulting back trouble, which can lead to chronic pain. Make sure your whole working environment is set up in a way that does not slowly kill you.
A relatively cheap solution I value very much are monitor arms, which are key to getting your monitor high enough that you do not have to look down at it and your desk low enough to not strain your elbows and wrists while typing. An added bonus is the free space under the monitor. I personally prefer the desk-mounted ones for flexibility reasons, and they also allow you to run the cables behind them, but wall mounting is also an option that can come in handy.
Another big part of your setup is your chair. Many office chairs are bad for your posture and do not support your lower back properly, encouraging you to lounge instead of sitting upright. Good chairs can be very expensive, the KAB Executive retails for way above $1000, but I would go as far as to say it is worth that. I do not believe in saving money when it comes to my health, and such a chair is going to last a decade or more without problems. But you might not have this kind of money to spend on a chair, which I also fully understand. All I want is to make you think about how you sit in your chair, especially during long periods of intense concentration.
Another option can be a standing desk. There are many different ways of doing this: fixed-height standing desks, manually adjustable desks and of course also electrically operated ones. In addition to that, there are also mini add-on desks that you place on an existing desk. Standing makes it easier to keep a proper posture (and also burns some additional calories), but standing all day long is also bad for your back, so the proper mix between sitting and standing is important.
There is one more point, and it is perhaps the most important one I have to make: whatever you do, as an office worker at a desk, it is extremely important to take regular breaks, get up (or sit down), stretch a bit, go for a short walk. I would recommend doing this at least once per hour, better twice. This is important for your cardiovascular system (i.e. your heart), and can also help you think. Regular exercise of any kind is also something everyone should do. Go to the gym, go running or ride a bicycle. Do something that makes you sweat two or three times a week, and your overall quality of life will skyrocket.
If you compare Emacs and vim, you will see that vim’s modal philosophy is clearly the superior one, but you will also find that vim is somewhat limited. It works quite well out of the box, and my current .vimrc is about 300 lines in size, most of that just setting options, defining shortcuts and loading plugins.
VimL tends to work for these things, but as soon as you start changing default behaviour and writing your own functions, it all falls apart. Vim does not know anything about its own state, there is no proper programmatic way to query it, and VimL as a language is extremely clunky and difficult to debug once you go beyond simple set statements. This shows, for example, in the built-in help, which consists of plain text files that describe the default configuration.
Emacs on the other hand is pretty much just a Lisp interpreter with a simple text editor, written in said Lisp, running on top of it. Because the configuration is done on the same level the editor itself is written on, in the same language, you can essentially replace all default behaviour, and Emacs has a great framework for getting at the current state and configuration of the editor, because everything is just saved into variables. There is this funny quote about Emacs being a great operating system lacking a decent editor, and I find it pretty accurate.
Enter Spacemacs. Spacemacs is an Emacs distribution, meaning it is a prepackaged set of packages and configs that more or less implement vim’s behaviour on the Emacs platform and integrate well, without you having to write several thousand lines of Elisp yourself, which is what you might actually need, depending on how much of vim you want in Emacs.
Spacemacs was first recommended to me by Kritzcreek some time ago, and ever since I have been keeping an eye on it. I generally like to build my own configs and only steal small parts from other people, but with default Emacs being so different from the modal vim-like experience I want, it would take months to even come close to my vim setup. Spacemacs on the other hand comes with mostly sane defaults, and my overall custom config on top of its defaults is currently less than 100 lines of Elisp. I also want to mention Aaron Bieber’s talk about evil-mode for finally pushing me over the line and making me invest a weekend into learning enough Emacs to be productive.
I am currently still in the process of building my own Emacs config from scratch, recreating the parts of Spacemacs I actually use, but I am not sure if I will ever get it to a point where it is even remotely as polished. Another factor, of course, is that I do not really want to waste too much time on this, and at least right now, Spacemacs does what I need it to do. In the end, you always have to consider whether that extra bit of custom behaviour is worth the time investment to get it to work.
Static unit tests have one major flaw: the programmer has to write them. Any function working on a list is usually tested using some small examples and an empty list to make sure nothing breaks. This usually means you have to write three to five test cases per function just to cover basic functionality. Not only is this a lot of work, it is also error-prone, because some cases are easily forgotten, especially since the programmer writing the tests has a set of preconceived notions about how their code might be used.
But when writing any meaningful software, you usually have to deal with all kinds of data from outside, and a lot of that data is to be mistrusted. There might be changes in APIs, software errors somewhere else, or even a malicious attacker trying to breach your system, and all that data could look anything but the way you expect it to. This is why unit tests should enforce a contract that the tested code has to follow, and fuzz testing can help you do just that.
Haskell with its amazing type system has a very nice way to do this. Have a look at this test:
describe "the finder" $
it "only returns proper superstrings of the search term" $
forAll nonNull $ \x -> property $
\y -> let rv = finder x y
in rv `shouldBe` filter (isSubsequenceOf x) rv
This code is part of a fuzzy finder I am toying with, and it ensures that a basic fact about it is always true: the results should always be proper superstrings of the search term, and the search term should in turn be a proper substring of all results. I could write a bunch of test cases with input like "abcdef" or "123", supply a list of possible results, work out myself what the result should look like, and just assert that the results are equal.
But that would not only result in a considerable increase in code, easily quadrupling the snippet above, it would also not cover all possibilities. Haskell’s type system ensures that the finder only gets used with strings and lists of strings, but strings can have many forms. Because all of the input is coming from the user, there might be punctuation in there, spaces, maybe escaped characters, maybe Unicode symbols, who knows. That is why I instruct the test suite to just generate random strings and lists of strings to use (see the forAll nonNull) and verify the property independently. This runs a default of 100 different sets of input, many of which are complicated messes of whitespace, backslashes and other special characters. If any of these break the test, the suite tells me which one, so I can find out what the problem with that input is and whether the contract I specified is incorrect, or the tested function does not follow it.
The current fuzzy finder has six different property contracts that it needs to satisfy, each four to six lines in size. Testing all of these manually would lead to a gigantic test suite, and modifying the parameters of the input would be a huge hassle. With fuzz testing, I can just tweak the nonNull generator to use different rules for generating input, like limiting it to a certain character range or string length.
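The generator itself is not shown here; as an illustration of what such a tweakable generator could look like (this is a guess at its shape, not the code from the actual test suite):

import Test.QuickCheck (Gen, elements, listOf1)

-- Non-empty strings over a restricted alphabet; widening or narrowing the
-- character list changes what kind of input gets generated.
nonNull :: Gen String
nonNull = listOf1 $ elements $ ['a'..'z'] ++ ['A'..'Z'] ++ ['0'..'9'] ++ " ._-"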
The bottom line here is: if your unit tests are handwritten, you most likely cannot be sure how your code behaves in production. There is almost no way you can think of all the possible ways your code is going to be used, and even if you could, the amount of testing code required would not be feasible.