About Apple, USB C and standards

I’ve been thinking about the recent, apparently insane product releases from Apple: an iPhone that doesn’t have a headphone jack, although a significant usage case for an iPhone is playing music from iTunes; a Macbook that has only one port, for both charging and data, and that port is basically incompatible with the rest of the IT industry unless you use adapters; and a Macbook Pro that has only those incompatible ports, has less battery capacity, and doesn’t have an SD card slot, although its supposed main target group is creative professionals, like photographers and videographers, who use SD cards to transfer images and video from their cameras when they’re in the field and don’t have a cable with them.

To add insult to injury, all those products are more expensive than the previous, more functional generation.

I tried to think of an explanation, and I came up with several possible ones. For instance, although Apple pays formal lip service to the creative professionals, they don’t really make that much money from them. When Apple actually did make most of its money from creative professionals, somewhere in the 1990s, they were almost bankrupt and Microsoft had to rescue them by buying $150 million of non-voting shares, and Steve Jobs was re-instated as iCEO (interim CEO, which is the likely cause of him deciding to i-prefix all the product names). They then started to market to a wider audience of young hipsters, students and wealthy douchebags (as well as those who wanted to be perceived as such), and soon they started to drown in green. Yes, they continued to make products intended for the professionals, but those brought them an ever smaller proportion of their overall earnings, and were deprioritized by the board, which is basically interested only in the bottom line. And it is only logical – if hipsters who buy iPhones bring you 99% of your money, you will try your best to make them happy and come back for more. The 1% of earnings you get from the professional photographers and video editors is, essentially, a rounding error. You could lose them and not even notice. As a result, the Mac Pro got updated with ever decreasing frequency and was eventually abandoned by the professional market, which is highly competitive and doesn’t have time to waste on a half-decade-obsolete, underperforming and overpriced product.

Keeping the hipsters happy, however, is a problem, because they want “innovation”, they want “style”, they basically want the aura of specialness they will appropriate from their gadget, since their own personality is a bland facsimile of the current trends. They are not special, they are not innovative, they are not interesting and they are not cool, but they want things that are supposed to be all that, so that they can adorn themselves with those things and live in the illusion that their existence has meaning.

So, how do you make a special smartphone, when every company out there makes all kinds of perfectly functional devices, within the constraints of modern technology? They have CPUs and GPUs slammed right against the wall of the thermal design, they have superfluous amounts of memory and storage, excellent screens… and there’s nothing else you can add to such a device, essentially, unless there’s a serious breakthrough in AI and those gadgets become actually smart, in which case they will tell you what to do, instead of the other way around. So, facing the desperate need to appear innovative, and at the same time facing the constraints of modern technology, which define what you can actually do, you start “inventing” gimmicky “features” such as the removal of the headphone jack and the USB A sockets, and you make a second screen above the keyboard that draws a custom row of touch-sensitive icons.

And apparently, it works, as far as the corporate bottom line is concerned. The professionals voice their displeasure on YouTube, but the hipsters are apparently gobbling it all up; this stuff is selling like hot cakes. The problem is, the aura of coolness of Apple products stems from the fact that the professionals and the really cool people used them, and the hipsters wanted to emulate the cool people by appropriating their appearance, if not the essence. If the cool people migrate to something else, and it becomes a pattern for the hipsters to emulate, Apple will experience the fate of IBM. Remember the PS/2? IBM decided it was the market leader and everybody would gobble up whatever they made, so they made the PS/2 series of computers with a closed, proprietary “Micro Channel” bus, trying to discourage third-party clones. What happened is that people said “screw you”, and IBM lost all significance in the PC market, had to close huge parts of its business and eventually went out of the retail PC business altogether. And it’s not that the PS/2 machines were bad. Huge parts of the PC industry standard were adopted from them – the VGA graphics, the mouse and keyboard ports, the keyboard layout, the 3.5” floppy standard, plus all kinds of stuff I probably forgot about. None of it helped IBM avoid the fate of the dinosaurs, because it attempted to blackmail and corner the marketplace, and the marketplace took notice and reacted accordingly.

People like standardized equipment. They like having only one standard for the power socket, so that you can plug in any electrical appliance and it will work. The fact that the power socket could probably be designed to be better, smaller and cooler is irrelevant. The most important thing about it is that it is standard, and you can plug everything in everywhere. USB type A is the digital equivalent of a power socket. It replaced removable media, such as floppy disks and CDs, with USB thumb drives, which can be plugged into any computer. Also, keyboards, mice, printers, cameras, phones, tablets: they all plug into the USB socket and are universally recognized, so that everything works everywhere. Today, a device without a USB port is a device that cannot exchange massive amounts of data via thumb drives. It exists on an island, unable to function effectively in a modern IT environment. It doesn’t matter that the USB socket is too big, or that it’s not reversible. Nobody cares. What’s important is that you can count on the fact that everybody has it. Had Apple only replaced the Thunderbolt 2 sockets with USB C sockets, and kept the USB A sockets in place, it would be a non-issue. However, this has a very good chance of becoming their Micro Channel. Yes, people are saying that USB C is the future, and that it’s only a matter of time before it’s adopted by everyone, but I disagree. The same was said before about FireWire and about Thunderbolt. Neither standard was widely adopted, because it proved easier to just make USB faster than to mess with another standard that basically tries to introduce yet another port that will not work anywhere else. There’s a reason why it’s so difficult for the Anglo-Saxon countries to migrate from Imperial units to SI. Once everybody uses a certain standard, the fact that it is universally intelligible is much more important than its elegance.

Recognize those ports? Yeah, me neither.

Yes, we once used the 5.25” and 3.5” floppy drives and we no longer do. We once used the CD and DVD drives and we no longer do. We once used the Centronics and RS-232 ports for printers and mice. We once used MFM, RLL, ESDI and SCSI hard disk controllers. We once used the ISA system bus and the AGP graphics slot. What used to be a standard no longer is. However, there are standards of a genuinely different kind, such as the UTP Ethernet connector, or the USB connector, or the headphone jack, or the Schuko power socket. USB and Ethernet and PDF and JPEG and HTML are some of the universal standards that make it possible for a person to own a Mac, because you can plug it into the same peripherals as any other computer. They make the operating system differences unimportant, because you can exchange files, you can use the same keyboard and mouse, you can use the same printer, you can plug into the same network. By removing those standard connections and ways to exchange data with the rest of the world, a Mac becomes an isolated device, a useless curiosity, like the very old computers you can’t really use today because you can no longer connect them to anything. Imagine what would have happened if Apple had removed USB when they first introduced FireWire, or Thunderbolt – “this new port is the future, you no longer need that old one”. Yeah. Do you think the Ethernet port is used because it’s elegant? It’s crap. The plastic latch is prone to failure or breakage, the connection isn’t always solid, dust can get in and create problems – it’s basically crap. You know why everybody still uses it? Because everybody uses it.

An analogy with tech

I was thinking about the similarities between the groupthink in the political sphere and its equivalent in the consumer technology sphere, and it dawned on me that I could more easily explain the political conundrum if I illustrated the problem with its technological equivalent, which might be less emotionally charged, at least for some parts of the audience.

So, let’s see the stereotypes.

1. An iPhone user is a stupid sheep who blindly follows trends and will pay more money for an inferior product.

2. Android is for people who want to customize their device.

3. Android is for poor people who can’t afford an iPhone.

4. A Mac user is a stupid sheep who will buy the overpriced shiny toy because he’s so stupid even Windows are too complicated for him.

5. Windows machines are virus-ridden, unstable, blue-screen-of-death-displaying boring gray boxes.

6. Mac is for creative people, Windows are for accountants.

7. Windows are for poor people who can’t afford a Mac.

8. Linux is for poor people who can’t afford Windows.

Need I go on?

Now, let’s go through the list.

1 and 2: There are many reasons why one might want an iPhone. One is because he really is too stupid to understand that there are alternatives. Another is because he’s too busy doing whatever his day job is to fiddle with a device, and just wants something that works reliably. His day job might be “astrophysicist” or “doctor”. He has neither the will nor the time to fiddle with a phone or to install an alternative kernel. He just wants speed, reliability, good build quality and, occasionally, the ability to run very specialized apps that are available for it. Someone who will “customize” his phone is more likely than not to live in his mom’s basement, because that’s the profile that’s likely to waste time on non-productive shit like that. If you have things to do, you use the phone to make calls, to google something or to find your way around on a map. You’re too busy operating on people’s brains, designing a new rocket engine, analyzing data from the Kepler telescope or getting that call informing you how that million-dollar deal went through. If your phone is all you have to deal with in your life, you’re either a phone designer, or someone who has too much spare time on his hands.

3: Yes, in many cases people who opt for Android phones find iPhones too expensive. That might be because they are poor. On the other hand, they might just want to buy something good but affordable and not too fragile for their kids. Or, they might decide that the iPhone just isn’t worth the premium; it does basically the same thing as a much cheaper Android phone, so why would you overpay for the same functionality? Essentially, you may have several good options, and once you’re satisfied that any of them will do a good job, you pick one based on both preference and an estimate of cost-effectiveness.

4: Yes, there are people who buy a Mac because they find Windows too complicated (although it is difficult for me to figure out how that is possible, since both systems are more or less equally trivial to master). On the other hand, there are people who will buy a Mac because Apple’s laptops have great battery life, a great screen, an excellent touchpad, or because they can run open source tools via macports or homebrew, allowing them to have access to the same toolkit they would have on Linux, but with better reliability, better battery life, fewer bugs, and with the ability to run Adobe apps. Those are excellent reasons, and it’s easy to understand why one would get a laptop from Apple; in fact it might explain why Apple laptops are outselling everything on the market, and why they are especially popular with technology and science professionals, who certainly aren’t using them because they find Windows intimidatingly difficult. I, for instance, migrated to a Macbook Air from a Thinkpad running Linux five years ago, simply because it was thin, light, had a great battery, had an SSD, and one of the best displays on any laptop. Also, it ran Unix natively, and I was so at home with the Linux command-line tools that I would have had great difficulties re-organizing the things I do in a way that was doable on Windows. So, the options for me weren’t Windows or OS X, but OS X or Linux, and I couldn’t run Lightroom on Linux.

5: Windows machines exist in a wide range of price, capability and performance. Yes, there are the basic Windows boxes, both laptops and desktops, that are indeed quite cringe-worthy. Then again, I’m writing this on an i7-6700K PC with very high-end components, and it’s incredibly fast, it’s as reliable as a toaster, and my monitor has the same LG-Philips matrix as a 27” iMac, only with a matte coating, so I get no reflections from the window beside me. Essentially, it’s the performance equivalent of a 6-core Mac Pro, with a better graphics card, better cooling, and at half the price. Basically, it’s as far from being a bland beige box as you can imagine. In most things, it’s equivalent to an OS X machine, except for the fact that I have to run a virtualized Linux machine in order to get the Unix functionality that I need. That’s less annoying on a desktop than it would be on a laptop, but essentially, the reliability, ease of use, performance etc. are so similar between the two that I don’t really care which one I use. I do have a mail archive manager that works only on the Mac, and that does determine my preference in part, because although I did write a proof-of-concept portable alternative in Java, I would hate to write and maintain something that already exists and works great, and I wouldn’t get paid for the work. I have better uses for my time, honestly. As for the viruses, I have a simple rule that has served me well so far: don’t click on stupid shit. As a result, I don’t get viruses. The last time I got a virus I was running Windows 98 or something, and it happened because I mistakenly clicked on something. I do use an antivirus, as a precaution, but honestly, if you’re having problems with viruses, you’re more likely having problems with porn sites and stupidity.

6: As for the Mac being for creative people: I used Windows 3.1 for desktop publishing with Ventura Publisher software in the early 1990s, I use a Windows machine for photo editing and writing books, and I’ve even used Linux machines for photo editing and writing books. I can basically make anything work for me, and if I don’t count as a creative person, nobody will meet the requirements. This thing about Macs and creative work is basically propaganda. Windows machines are used by some 95% of all computer users, which basically means they are used both by the most creative people and by most accountants. It’s just that creative people tend to configure their machines differently, that’s all. A programmer will have different requirements than a photographer or a graphics designer.

7: Try configuring a dual-Xeon 24-core, 128GB PC workstation with two Titan X Pascal graphics cards and tell me it’s for poor people who can’t afford a Mac. I personally can afford a Mac laptop because it’s good, and I can’t afford a Mac desktop because it’s worse than my machine and for more money. Essentially, I can’t afford overpriced, underperforming shit of any kind.

8: Since Linux runs on basically all servers everywhere, and since Google uses it on all workstations for their developers, there are obviously good reasons for very rich people to use it in a production environment. If you’re a developer, a good Linux distro might be the best thing to have on your desktop in terms of getting things done efficiently, basically letting you use your desktop as a test environment for the software you’re developing. Personally, I prefer having the Linux development and testing environment virtualized, because Windows makes better use of my hardware, but it’s a matter of preference, and I could very easily see myself running Linux on the hardware and doing everything from there if it became more convenient for some reason.

You can see how there are many reasons why someone might have a certain opinion, reasons that differ significantly from the stereotypes. One might use something because he’s too stupid to know better, and another person might use that same thing simply because it works better for his usage case. One person can have a certain political attitude because he’s stupid or evil, while another person can have a very similar position but on a far higher intellectual octave, because he knows much more than you do, has better insight, greater intelligence and, in the end, you might not have any arguments that could disprove his position. So tread lightly. The leftists can’t fathom why anyone with an IQ over 150 would vote for Trump or have political opinions on the right of the political spectrum; due to their stereotypical understanding of the opposition they are facing, they are simply unable to either comprehend it, or argue against it, or do anything constructive about it whatsoever, which leaves them with the option of smearing fake blood on their faces and chanting slogans. This doesn’t differ greatly from the shock some people experience when they get to know an IT expert and technology enthusiast who uses an iPhone. It’s not that the concept itself is unfathomable; it’s just that they have painted themselves into a corner with their closed-minded stereotypes and inability to understand different positions and scenarios.

Sony A7II

As you already know if you follow me on g+, I got my new Sony A7II camera from ebay a few days ago.

I’ll spare you the detailed equipment review and just show the pictures I made with it so far:

[photos: dsc00370, dsc00645, dsc00227, dsc00676, dsc00688, dsc00821, dsc00913]

I love it, it’s great.

How powerful are the modern nukes?

I wrote generally about nuclear weapons and the strategy of their use, but not much about the weapons themselves.

When there’s talk about the yield of modern nuclear weapons, people usually compare the Hiroshima bomb, with a yield of 15KT (thousands of tons of TNT equivalent), to the Tsar Bomba, with a yield of 50MT (millions of tons of TNT equivalent), probably wanting to show how much bigger and more destructive our modern bombs are.

Well, yes and no. You see, no modern nuclear weapon has that big a yield. The more powerful weapons have been systematically phased out, for several reasons. First, because they are heavy, big and clumsy, and thus difficult to deliver. The Tsar Bomba was so cumbersome it was difficult to deliver by plane, let alone by rocket. It was a prototype designed for showing off, not an actual weapon. Essentially, the Americans detonated the Castle Bravo test (which was so poorly controlled the Americans shat themselves), and the Russians wanted to one-up them by detonating a bomb more than three times the size, and they too shat themselves when they saw the results. The Tsar Bomba was actually intentionally reduced in yield from the design’s theoretical maximum of 100MT, so that design is the actual top-yield nuclear device that mankind can demonstrably make. The laymen who think that the purpose of nuclear weapons is to blow us all up, and that those bombs are designed by psychopaths, probably think that the goal of nuclear weapon design is to produce ever stronger weapons, but that is not the case. In fact, the yield of the deployed nuclear weapons has been steadily going down for the last several decades. The strongest fielded weapons were on the order of 20MT, but they have all been phased out. Why is that, you will ask?

Well, one of the reasons is that one big warhead is easier for an anti-ballistic missile to destroy than ten smaller ones. Furthermore, several smaller weapons are much more effective at destroying a wide area than one bigger weapon. And last, there isn’t much of a difference between a 100KT and a 20MT bomb when it comes to destroying military targets. In fact, a spread of 100KT warheads will probably do a better job, because it’s less sensitive to the accuracy of any individual hit.
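To put rough numbers on the area argument, here’s a minimal sketch in Python, assuming the standard rule of thumb that blast radius at a given overpressure scales with the cube root of yield, so destroyed area scales with yield to the two-thirds power; the specific yields are illustrative, not targeting data:

```python
# Rule of thumb: blast radius at a fixed overpressure ~ yield^(1/3),
# so the area destroyed to that overpressure ~ yield^(2/3).

def blast_area(yield_kt: float) -> float:
    """Relative area destroyed at a fixed overpressure (arbitrary units)."""
    return yield_kt ** (2.0 / 3.0)

one_big = blast_area(1000)          # a single 1MT warhead
mirv_spread = 10 * blast_area(100)  # ten 100KT warheads, same total yield

print(f"single 1MT warhead: {one_big:.0f}")     # 100
print(f"ten 100KT MIRVs:    {mirv_spread:.0f}")  # ~215
```

Splitting the same megaton of total yield into ten 100KT warheads more than doubles the area you can destroy, which is why the MIRV spread wins even before you account for accuracy and ABM survivability.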

When it comes to civilian targets, they are really an afterthought. The designers of warheads are more concerned with destroying hardened bunkers, command centers, aircraft carrier battle groups and similar targets; nobody really gets his dick hard thinking about destroying a city of millions of people with nuclear weapons. If anything, it makes everybody ashamed of their work, and the weapon design is harder to justify if it doesn’t have a clear military purpose. The civilians are such soft targets in any case that even the Hiroshima bomb did a good enough job at causing immense suffering and death. It is much more difficult, as it proved, to sink a heavy cruiser like the Prinz Eugen than to turn living people into shadows on concrete.

There’s no real difference between American and Russian weapons designers in that regard. They have both been following similar design parameters, working more on miniaturization, reliability, maneuverability, accuracy and yield control than on yield magnitude, because after both sides ascertained that they could make very big bombs as experimental devices, they started thinking about what is actually practical as a fielded weapon, and they came to the conclusion that the best thing you can have is a spread of ten or so individually maneuverable, highly miniaturized and reliable warheads, launched from a stealthy mobile platform. Both sides initially fielded 20MT weapons, but later withdrew them as they saw no practical military use for them.

There are other reasons why a reduction in a weapon’s size is preferable to an increase in its yield. You see, a small weapon can be launched with a lighter, faster rocket. Such a rocket is smaller, so you can pack many more of them inside a submarine. A lighter rocket can achieve maximum speed more quickly, and it doesn’t stay long in its vulnerable, slow phase of flight. You can put it on a truck, or in a train, or on a cruise missile. They even made suitcase-sized bombs. Miniaturization has a huge number of advantages, and yield, other than bragging rights, has precious few.

So, what do Americans and Russians currently have fielded in operational form?

The Americans have Trident II SLBMs on Ohio class submarines, armed with 8-12 MIRVed warheads. The warheads can be either 475KT or 100KT, depending on which version is used. They also have the Minuteman III silo rockets, which have fallen into disrepair and are not really taken seriously due to their vulnerability. They carry 3 MIRVs with 170KT to 500KT yield, depending on the version.

The Russians have Sineva SLBMs on Delta IV class submarines, 4-8 MIRV warheads with 100KT yield each. They also have Bulava SLBMs on Borei class submarines, 6-10 MIRVs with 150KT yield each.

They also have mobile Topol-M launchers with single 800KT warheads, and mobile Yars launchers with 4+ MIRVs with 150-250KT yield.

They also have the silo-based R-36 Voivoda (known in the West as the SS-18 Satan), which is in a similar state of disrepair as its American Minuteman III counterpart, and is being phased out and replaced with Yars launchers and, when its testing finishes, the Sarmat heavy launcher. The payload for those is 10 MIRVs with 750KT yield.

The Russian nuclear weapons are almost all brand new, with modern designs; they are tested, reliable and dependable. It’s ironic that people in America think the Russians only have Soviet legacy nukes, when that’s not the case at all; if anything, it’s the Americans who have legacy stuff from the seventies and eighties, and it’s questionable whether it all works, and how well. It is telling that of all the Russian nuclear rockets, only one is as old as the newest of the American nuclear rockets, and it’s the one they are quickly phasing out because it’s unreliable. Just food for thought.

So, what does that tell us? First of all, when you make calculations about yield in nukemap, ignore everything with a yield over 800KT, because for all intents and purposes it doesn’t exist. Second, always count on more than one hit, because you’re most likely getting hit by a MIRV spread. Third, if you live in a highly populated area that doesn’t double as a strategic military installation (such as a nuclear submarine shipyard), you are probably a very low priority target and nobody really has any wish to kill you, unless your politicians have already killed millions of people in the enemy’s capital cities, in which case the enemy got so mad they’re probably going to kill you all. The name of the game in nuclear strategy is “disarm”, not “commit the worst genocide of all time”. The enemy’s civilian population will be hit only by the doomsday retaliatory strike of last resort. However, the infrastructure of the world will be a priority target – satellites, Internet, electricity, transportation. All electronics will be disabled either permanently or temporarily by EMP blasts. You will hear no news, and there will be no way to find out what happened, maybe forever. There will be no food refrigeration or transportation. There will soon be no modern medicine. Honestly, it would be less painful and terrifying if they did hit us with the 20MT warheads; then at least we could hope for quick evaporation. This way, the soldiers will be the lucky ones: they will die from bombs, while the rest of us die from waterborne bacteria and hunger.

Some photography stuff

I’m now going to write about something I usually don’t write much about, but which makes possible all the stuff that I publish online. Hardware.

Why don’t I write about it? Well, because I just assume it implicitly. Computers, cameras, lenses: they are tools. If they work well, I don’t give much of a fuck about them. When they fail or become a pain in the ass, I have to think about them and do something. Such as now.

[photo: dsc01784]

This is my main camera, the Canon EOS 5D DSLR, which I bought in 2006. I used it to take a huge number of photos, including the majority of the stuff used in a photo exhibition and in my commercial work (corporate and private websites).

[photo: img_0241]

This is the second time the mirror has fallen off. The first time, I glued it back with superglue. The second time I did the same, but I no longer have that much faith in the process. There’s a factory recall for it, and of course I could have it professionally serviced, but the problem is, it’s 10 years old. Technology did manage to advance significantly in the meantime, and while it’s worth fixing, it’s not worth keeping as my main camera. As in, I need a new camera body to put my Canon lenses on.

This is my secondary camera:

[photo: 20141015_140532]

It’s the Olympus E-PL1, a micro four-thirds body that I use to mount legacy Minolta lenses and macro extenders. It creates excellent images, almost on par with the Canon 5D.

The problem is, everything on the camera except the sensor is a pile of shit. It’s the most awkward, uncomfortable, unergonomic camera imaginable, and despite its great image quality it made photography a huge pain in the ass for me, especially since it’s usually my walkaround camera of choice, being small and light. It also doesn’t have a viewfinder, so I can only take pictures holding it at arm’s length, like a phone. This doesn’t help with image stability. Also, you can’t see shit on the screen in strong sunlight, which happens to be when the light is best for translucent subjects. Essentially, I put it on the floor, guesstimate the focus and pray. That’s not how you’re supposed to do things. On a tripod, of course, it’s great, but having the smallest possible camera and then taking along a tripod that’s several times the weight and bulk of your proper camera doesn’t make much sense.

The Olympus has one absolutely great quality: it shows you exactly what the sensor sees, including 100% magnification, which is great for manual focusing with precision that’s completely beyond any autofocus system I’ve tried. This means you can really nail the sharpness, if you work slowly, of course. Which I do. Also, you can overlay the live histogram on the display, very accurately nailing exposure without retrying. Also, it has in-body image stabilization, which is incredibly helpful for hand-held work in low light, which is about 50% of everything I do. Those things are so helpful that I’ve found myself neglecting the Canon for the Olympus, with the result that I can’t use all the Canon lenses that I have.

[photo: img_0975]

What can the Canon do that the Olympus cannot? This.

As a result, I figured out that my ideal camera would be something that has live view with an articulated screen (so that I can put it in the grass and tilt the screen upwards to see what I’m doing), a quick high-resolution viewfinder, and a 35mm sensor with the same image quality I have on the Canon; it needs to be small enough not to be bothersome when I take it with me for a long walk, it needs to have sensor-based image stabilization (because none of my lenses have IS), and it has to be able to work with lenses adapted from both the Canon EF and Minolta MC/MD mounts, so that I could use everything I already have, because it’s good and I don’t feel like wasting money on duplicating optics.

As it turns out, such a camera exists: all three second-generation Sony A7 models fit all my requirements. Since the A7S II is specialized for video (which I don’t shoot) and too expensive, and the A7R II is also too expensive, I decided to get the A7 II. Advantages: it’s not too expensive, and it has the same goodies as the other two, minus the super-fancy viewfinder and the super-fancy backside-illuminated ultra-high-res sensor from the R model. I decided I can live without those for the benefit of costing half the money while being identical in all other regards. As for the resolution, I shoot at 12-13 MP and from that I routinely make B2 sized prints. 24MP will be just fine. Yes, I’m competent enough to actually utilize the R model’s 42MP sensor, but for the difference in price I can get all the lenses I would want, and those are worth more to me.
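If you want to check that print arithmetic, here’s a quick sketch in Python; I’m assuming the original 5D’s 4368×2912 pixel grid for the 12-13 MP figure, 6000×4000 for a 24MP sensor like the A7 II’s, and the standard B2 paper size of 500×707mm:

```python
# Print-resolution check for a B2 print (standard size: 500 x 707 mm).
# Assumed pixel grids: 4368 x 2912 (original Canon 5D, ~12.7 MP) and
# 6000 x 4000 (typical 24 MP full-frame sensor).

MM_PER_INCH = 25.4
B2_LONG_IN = 707 / MM_PER_INCH   # ~27.8 inches
B2_SHORT_IN = 500 / MM_PER_INCH  # ~19.7 inches

sensors = {"12.7 MP": (4368, 2912), "24 MP": (6000, 4000)}

for name, (px_long, px_short) in sensors.items():
    # The limiting figure is whichever print dimension runs out of pixels first.
    ppi = min(px_long / B2_LONG_IN, px_short / B2_SHORT_IN)
    print(f"{name}: ~{ppi:.0f} PPI on B2")
# 12.7 MP -> ~148 PPI, 24 MP -> ~203 PPI
```

Around 150 PPI already looks fine on a print that size at normal viewing distance, so 24MP leaves comfortable headroom.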

I recently bought a used Sony R1 for my kid, and I tried taking pictures with it myself. The image quality, when the camera is used properly, is so similar to the Canon 5D that it looks like two shots taken with the same camera and different lenses. It has gorgeous image quality at low ISO, paired with a lens that is excellent when stopped down properly.

[photo: dsc01772]

The problem is, it’s a perfect camera for slow tripod work and a shitty camera for hand-held work, especially in low light. No image stabilization of any kind, very noisy above base ISO, and very difficult to focus accurately due to shitty AF and a very low resolution viewfinder and display without any indication of in-focus areas. Also, the lens is not sharp at close focus, especially wide open, which is how I use it for more than half of my photography. Also, it only has that one lens, so no macro extenders, and no extreme wide angle. Not good for me. Great for my kid to learn photography on, though, so it’s still a big win.

Regarding lenses, I have a love-hate relationship with the “mid-range zooms”. I had several excellent ones – Minolta MD 35-70mm f/3.5 and Zuiko Digital 14-54mm f/2.8-3.5, for instance, and also Canon EF 35-70mm f/3.5-4.5. The last one isn’t really appreciated but I made most of my closeup and landscape shots with it.

[photo: safrani1]

EF 35-70mm f/3.5-4.5 “shit lens” with macro extension tubes

It’s a pathetic-looking, creaky, plasticky thingy that makes jaw-dropping pictures if you know how to use it. So it’s obvious why I like this type of lens. The reason why I hate them is that when I have one, I tend not to take it off my camera because it’s convenient, and so I end up using it in places where it sucks, and it degrades the quality of my work. Especially when I use the 14-42mm f/3.5-5.6 on the E-PL1, which is optically my worst lens and is just fucking terrible in all ways but one: it’s small and light, and so I end up using it instead of proper, albeit heavy, pieces of glass.

So, of course, I got a mid-range kit zoom for the Sony, the 28-70mm thing that everybody says is soft and has low contrast. The problem is, the alternative is the Zeiss 24-70mm, which has better contrast and looks nicer, but most copies seem to be soft and can be actually worse than the cheaper kit zoom. So I said, OK, let’s get the plasticky cheap one, because it was almost free (the kit with the lens was barely more expensive than the body alone), and I can use it on a tripod stopped down to f/13-16, which is where I take most of my tripod photography on 35mm, and it had better be tack sharp there. But if it’s any good, I’ll have one light walkaround autofocus lens for when I just want to have something better than my phone with me and not carry several kilos of gear, and if I want sharp, I have lenses that do just that. The 24-70mm Zeiss is simply too expensive for me to buy without testing the specific copy extensively prior to purchase; it’s a thousand-euro lens, for fuck’s sake. For that kind of money, it had better give blowjobs and make great coffee. But according to all reports it’s optically sub-par, so if I want a really sharp lens in that range, I’ll probably try something from Sigma’s Art series, like the 24-105mm. If I want light, I’ll have the plasticky cheap one, and if I want something that’s both light and good, I’ll get the 35mm f/2.8 Zeiss. That one is almost pocketable, it’s really sharp, and it can still cut the depth of field well enough for my uses. Also, 35mm is probably my favorite focal length for landscapes, because anything wider usually grabs telephone poles and similar stuff that I want to omit in normal situations, and it’s still wide enough to make sense. I also love how the other Sony-Zeiss prime, the 55mm f/1.8, draws, but that one is more of a specialist tool. It does portraits and closeups excellently, but for those I would actually prefer the 90mm f/2.8 macro. For walkaround photography, the 55mm is too long; my walkaround lenses are usually the 17-40mm or the 15mm fisheye, and it wouldn’t surprise me a bit if I end up using the fisheye as the mainstay on the Sony.

[photo: img_9878]

Someone will say: what about the lack of autofocus on the adapted lenses? Honestly, I usually work slowly and turn the damn thing off anyway in most cases. The only thing for which I really prefer autofocus is portraits, because with manual focusing it’s really difficult to get the eyes critically sharp at as shallow a depth of field as I prefer, because the model’s breathing motions are usually all it takes to bring the iris out of focus. Accurate focus confirmation, however, might be enough for me to get accurate focus with MF lenses.

[photo: p4283424-crop]

Olympus E-PL1 with Minolta MD 50mm f/1.7, manual focus

I anticipate the question: why do you whine so fucking much when it’s obvious that you manage to make similarly good pictures with any kind of equipment you get your hands on? Because the process of making the pictures is supposed to be fun. If something is painful to use, I will stop using it. Some pieces of equipment have made me turn away from photography almost completely. Equipment is important in the sense that it can either feel nice and wonderful to work with, or it can feel like having your nails pulled with rusty pliers. I’ve tried both; I don’t have to tell you which kind I prefer.

Anyway, it’s just me thinking out loud about it. You’ll see the pictures when the actual camera arrives. If the Americans don’t cause a nuclear war first.