I’m starting to learn iOS app development, so I wanted to learn the basics of the latest and greatest for iOS UI development: Storyboards. A quick search online led me to this great tutorial at raywenderlich.com.
I found it was short, to the point, and covered the basics well. It introduced me to the Xcode interface and the primary concepts of Storyboards. I would recommend it if you’re jumping into iOS development.
Falsehood articles are a form of commentary on a particular subject, and are appreciated by the developer community at large for their effectiveness and terseness. They’re a convenient written form for approaching an unfamiliar domain: they dispel myths, point out common pitfalls, and highlight inconsistencies and subtleties.
In a sense, Falsehood articles are a suite of wordy unit-tests covering extensive edge-cases provided by real-world usage.
I’m personally a fan of the date and time falsehoods.
What I want to talk about is something I see in a lot of code that drives me up the wall: identifiers that are too damn long.
Yes, names can be too short. Back when C only required external identifiers to be unique up to the first six characters; auto-complete hadn’t been invented; and every keypress had to be made uphill, in the snow, both ways; it was a problem. I’m glad we now live in a futuristic utopia where keyboard farts like p, idxcrpm, and x3 are rare.
But the pendulum has swung too far in the other direction. We shouldn’t be Hemingway, but we don’t need to be Tennessee Williams either. Very long names also hurt the clarity of the code where they are used. Giant identifiers dwarf the operations you’re performing on them, are hard to visually scan, and force extra line breaks which interrupt the flow of the code.
While the algorithmic part of programming is a science, writing readable, easily understood code is an art.
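To make the point concrete, here’s a small before/after sketch (the names and the `Subscription` type are hypothetical, invented purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    monthly_price: int

# Before: giant identifiers dwarf the operations performed on them.
def calculate_total_monthly_subscription_revenue_for_account(account_subscription_record_list):
    total_monthly_subscription_revenue_for_account = 0
    for individual_subscription_record in account_subscription_record_list:
        total_monthly_subscription_revenue_for_account += individual_subscription_record.monthly_price
    return total_monthly_subscription_revenue_for_account

# After: the same logic, with names sized to their (small) scope.
def monthly_revenue(subscriptions):
    return sum(sub.monthly_price for sub in subscriptions)
```

Both functions do exactly the same thing, but in the second one the logic is visible at a glance instead of being buried under the names.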
My router is dying. How do I know? It keeps dropping my wireless connection. I’ll be away from my home server and, when my router goes down for a few minutes, it loses its wireless connection. Then a bug in OS X results in my server failing to reconnect to the network, leaving it offline until I can get to it.
Peter Bright at ArsTechnica has the detailed and fascinating story on how Microsoft came to have a single kernel for all Windows devices: OneCore. So far, Microsoft is the first in the consumer operating system space¹ to achieve this feat:
Microsoft can now credibly speak of having one operating system (with Windows 10 as its most familiar branding) that can span hardware from little embedded Internet of Things devices to games consoles to PCs to cloud-scale server farms. At its heart is a slimmed down, modularized operating system dubbed OneCore. Windows 10, Windows Server, Xbox One, Windows 10 Mobile, Windows 10 IoT, and the HoloLens operating system are all built on this same foundation.
It took a long time to reach this point. Along the way, Microsoft built three major operating system families, killed two of them off, and even reorganized the entire company. In the end, all that action was necessary in order to make building a single operating system practical. Apple and Google will probably do something similar with their various operating systems, but Microsoft has managed it first.
This is an incredible feat, particularly given that it was accomplished while still maintaining Microsoft’s sometimes extreme levels of backwards compatibility.
OneCore comes with initial benefits for Microsoft and third-party developers; however, consumers will reap the benefits indirectly in the long term:
Perhaps the biggest gains, for both developers and users, come from unexpected new platforms. When the first work on MinWin was started, nobody could have imagined that one day HoloLens might exist. But with the OneCore platform, adding support for this new hardware becomes relatively straightforward.
The past decade has been an incredible period of technological innovation, and the next decade looks just as bright as the technology companies fire on all cylinders. I can’t wait to see, and be a part of, what comes next.
Yes, technically Linux was first — by a long shot. Let’s be honest though: Linux has negligible market share and impact on the consumer desktop market; its dominance is on servers and, arguably, embedded systems. ↩
…we put these servers into a scream test environment where they still have access to our corpnet, but users are now limited from doing just about anything on the machines except logging in. If someone does log onto a machine to try something (like running tests, installing other software, etc.), a dialog pops up telling them that this machine is in a scream test and they need to contact my lab managers if they want access back for this server, otherwise it will be retired in so many days. We usually put a set of servers in a scream test for 2-4 weeks. Some people on the team will scream profusely and we are happy about that.
Programming Sucks does an impeccable, and hilarious, job of describing the parts of programming that, well, suck:
Every friend I have with a job that involves picking up something heavier than a laptop more than twice a week eventually finds a way to slip something like this into conversation: “Bro, you don’t work hard. I just worked a 4700-hour week digging a tunnel under Mordor with a screwdriver.”
They have a point. Mordor sucks, and it’s certainly more physically taxing to dig a tunnel than poke at a keyboard unless you’re an ant. But, for the sake of the argument, can we agree that stress and insanity are bad things? Awesome. Welcome to programming. […]
In particular, this is one metaphor that rings far too true:
…the bridge was designed as a suspension bridge, but nobody actually knew how to build a suspension bridge, so they got halfway through it and then just added extra support columns to keep the thing standing, but they left the suspension cables because they’re still sort of holding up parts of the bridge. Nobody knows which parts, but everybody’s pretty sure they’re important parts. […]
Much of programming is managing insanity. Some of us are insane enough to love doing it.
In C# it can sometimes be confusing to know which exceptions to throw and which to catch. The ever-insightful Eric Lippert has a great way of categorizing exceptions, leading to the simple answer: only catch the vexing exceptions.
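Lippert’s article is about C#, but the “vexing” category translates to most languages: an exception thrown for an entirely non-exceptional failure, which you’re forced to catch unless the API offers a non-throwing alternative (like C#’s `int.TryParse`). A rough sketch of the idea in Python (`parse_int_or_none` is a hypothetical helper, not a standard function):

```python
def parse_int_or_none(text):
    """int() raises ValueError on malformed input -- a 'vexing'
    exception, since bad user input is not exceptional at all.
    This TryParse-style wrapper catches it once, so callers
    never have to handle the exception themselves."""
    try:
        return int(text)
    except ValueError:
        return None
```

The point is that the catch lives in one place, at the boundary where the vexing failure originates, rather than being scattered through the calling code.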