Tag Archives: technology

More VR Demos

More demos of VR for friends, this time with Dave and Kathie, separately with Troy, and (updated in December 2021) with Clay:

See also past VR demos with Martha and Dave and with Glenn, Michele and Seaerra:


and my original demo of VR experiences with me and Darlene:



Virtual Reality Is…

Virtual reality is… freakin’ amazing.  (At least with the new, very high resolution HP Reverb G2… see my hardware details below.)

Let me try some words first…

It’s a truly astounding and engrossing experience – an incredible, brain-fooling trick: you put on the headset and immediately you feel like you are physically somewhere else.  Rationally you know that you’re still sitting or standing in your room at home, but as you look around and up and down and see and hear this entirely different environment, rendered perfectly in sync with your head movements, it’s utterly convincing that you’re somewhere else.  Maybe you’re standing on top of Mt. Everest or hanging in space over the Earth.  It’s not at all like looking at a screen, or even a 3D movie.  It’s like being somewhere else.

Hanging in space in VR

Yes, you’re wearing this contraption on your head.  Yes, you know you can just remove the headset and once again see the room you’re really in.  And yet, part of your brain is fooled.  You hold up your hands in front of your face and you see a virtual set of hands that move and turn in sync with your physical hands.  Just standing and turning around makes you feel like you’re there, but then you take some steps – some actual physical steps in the real world – and you find yourself moving in this virtual world.  It suddenly becomes all the more convincing and all the more of your brain is fooled.

You crouch down and look around and your perspective changes to match.  You reach down and “pick up” an object off the virtual floor.  You can’t feel it, but you can turn it over in your hands, set it on a table or throw it across the virtual space with a simple, natural motion.  You step to the edge of a balcony, look over the side and feel the threat of vertigo.  You turn away and approach a door, reach out for the handle and physically pull the door open to look into the next room.  It’s dark as you step inside, so you take out your flashlight and shine it left and right into the corners of the room, trying not to be caught by surprise.

You see a zombie wake and turn towards you, moaning and shambling closer.  He’s just a dozen virtual feet away as you grab your pistol and raise it to fire – only to discover you’re out of ammo.  Unlike any game played on a flat screen, you feel like you’re actually in the same space with this menace bearing down on you.  You have to resist the instinct to physically back away – or to turn and run in panic, yanking out the cord between your headset and computer.  Instead you hold your ground, reminding yourself that this creature so clearly coming at you isn’t real.  You eject the empty magazine and physically reach over your shoulder with your other hand to grab a fresh one out of your virtual backpack, slip it into the gun, pull back on the slide – not button presses, mind you, but physical hand gestures – and then quickly aim (actually raising your arm, no thumbstick or mouse movement) and let off several shots to fell the zombie.  And yeah, don’t be surprised to feel an elevated pulse or quickened breath after dealing with an intense scene.  The immersion is just crazy amazing.  And it’s really easy to forget yourself, try to lean on a banister or a table, and get a sudden rude reminder that there’s nothing really there to support you.

Of all your senses, touch is the one most notably missing.  You can’t feel the wall or door that blocks your way, objects that you pick up have no weight to them, and what you always feel underfoot is the familiar floor of your home.  Each of these does break the illusion and remind you that you’re in a simulation.  But sight, sound and natural physical gestures alone go a long way toward providing an amazing array of virtual experiences: defending a castle against animated invaders by shooting arrows with a virtual bow; using a giant slingshot to aim and launch talking cannonballs for maximum destruction in a giant warehouse in Valve’s The Lab; sitting in the cockpit of various airplanes in Microsoft’s new Flight Simulator, flying over detailed renderings of any part of the entire world and through live simulated weather; exploring the frightening dystopian world of Half-Life: Alyx, fighting off walking zombies and leaping headcrabs; or simply walking around the Mos Eisley Cantina from Star Wars or the streets and shops of Stormwind from World of Warcraft.

Okay, how about some video…

I’ve been blown away by what it’s like to experience virtual reality right now and I’ve been itching to share the experience with friends over the past few months, but of course we can’t get together with the pandemic still going strong.  So I’ve put together a video to share some of the experiences – even though a video can’t come anywhere close to conveying what it’s like to actually be immersed in virtual reality.  It’s the same difference as actually physically being somewhere versus watching a video recording of someone else being there, but here goes anyway:


A collection of VR experiences

What I demonstrate in that video is the default Steam VR home environment, Google Earth VR, MS Flight Simulator, Valve’s The Lab, Half-Life: Alyx, I Expect You to Die, Superhot VR, a World of Warcraft environment and a bunch of Steam VR environments: Enterprise bridge, Hobbit house, Mos Eisley Cantina, Sno Champ and a robot boxing ring.

There’s still so much more to try though: Assetto Corsa car racing sim, Star Trek Bridge Crew co-op simulation, The Room VR puzzle game, Elite Dangerous space sim, The Climb 2 extreme climbing game, Earthlight NASA Spacewalk sim, Keep Talking and Nobody Explodes cooperative challenge, DCS World WWII flight battle sim, Fruit Ninja VR, Star Wars Squadrons space sim, No Man’s Sky, Fallout 4 VR, Borderlands 2 VR, Detached puzzle sim in space, Down the Rabbit Hole puzzle adventure, some short Portal-based experiences, lots of interesting environments to explore, etc.

See more here about Microsoft’s new Flight Simulator: Flying All Over the Planet


Hardware: HP Reverb G2 VR headset and controllers

It was the promise of VR with Microsoft’s new Flight Simulator that pushed me to look into getting a proper PC-based VR system.  I’ve tried a couple of inexpensive headset shells in the past that let you use your phone’s display and its motion-tracking ability to get a taste of VR, but that was nothing like this.

Comparison of headset image quality from a YouTube video

What I decided to buy was the new HP Reverb G2 based on its incredibly high resolution displays: 2160 x 2160 for each eye.  The result is that you can’t see the pixels or any sort of “looking through a screen door” effect and the image quality is much improved over older headsets, particularly in the center portion of your view.

That’s a challenge with all VR headsets though: they need lenses to project the image so that your eyes are focusing on an image a meter or two away rather than on the actual physical display that’s only a couple of centimeters away.  These lenses introduce chromatic distortions that make the outer areas much less clear than the center portion.  In addition, to leave room for eyeglass wearers, the lenses and displays need to be mounted a little way out from your eyes, and this in turn limits the apparent field of view.  You end up feeling like you’re looking through ski goggles or some very wide binoculars.  Different headsets make different trade-offs in lens quality and field of view versus expense.  HP here has managed to get some very high resolution displays with decent lenses into a fairly inexpensive full kit.
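For a rough sense of how these trade-offs play out, you can compare headsets by pixels per degree – display pixels divided by the field of view they’re spread across.  Here’s a quick back-of-the-envelope sketch; the field-of-view numbers are my own approximations, not official specs:

    # Rough angular-resolution comparison between headsets.
    # The per-eye horizontal FOV values are approximate guesses -
    # actual FOV varies with face shape and eye relief.
    def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
        return horizontal_pixels / horizontal_fov_deg

    print(round(pixels_per_degree(2160, 98)))   # HP Reverb G2: ~22 ppd
    print(round(pixels_per_degree(1440, 108)))  # Valve Index:  ~13 ppd

Human foveal acuity is often quoted at around 60 pixels per degree, so even the Reverb G2 is nowhere near “retinal” resolution – but the jump from roughly 13 to 22 pixels per degree is what makes the pixel grid effectively invisible.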

The audio on this headset is great too, borrowing the speaker design from Valve’s more expensive Index headset: the speakers sit completely off of your ears, unlike headphones, adding to the overall comfort.

The Reverb’s controllers are a bit of a compromise though: in order to eliminate the need for externally mounted tracking modules, the Reverb G2 uses four outward-facing cameras on the headset to track the position of the controllers.  It works okay, but it definitely has blind spots and can’t always tell where the controllers are.  These Windows Mixed Reality-style controllers also aren’t able to track individual finger positions like the Valve Index controllers can.  Better still, the Index controllers strap around your palm, leaving you free to make grasping motions without having to hold on to the controllers.

I’m using the HP Reverb G2 with my 2019 16″ MacBook Pro and an external GPU case – first with a 5700 XT and now a 6800 XT, one of the latest high-powered graphics cards.  To use the Reverb, I have to boot into Windows (via Boot Camp; it doesn’t work under Parallels), but it works well and provides access to all of the many Steam VR-compatible titles as well as Oculus/Rift-exclusive titles via Revive.

Other Hardware: Valve Index headset and controllers

I’ve now had the opportunity to try the older and more expensive Valve Index system.  It has a much lower resolution of 1440 x 1600 per eye and it definitely shows in comparison to the Reverb G2.  There’s a fairly obvious “screen door” effect where you can see the fine grid of pixels, and things just aren’t as clear and crisp.  On the other hand, it can display a wider field of view than the Reverb G2, which is nice.  It was also interesting to discover that the Reverb G2 is more comfortable to wear than the older Valve Index.  It’s lighter and more secure on your head without being tight, and it doesn’t cramp your nose.  One thing I do prefer on the Valve Index though is the ability to adjust the field of view by turning a dial to move the lenses closer or farther away.

The Valve Index depends on a set of external base stations to track the controllers and help track the headset – which certainly isn’t as convenient as systems that do “inside out” tracking from the headset itself.  In addition, I wasn’t aware that the base stations emit a constant high-frequency whine.  Initially this was very bothersome, but it seems to have improved with a firmware update and isn’t noticeable anymore.

It’s actually possible to hack together a system for using the Valve Index controllers with other headsets (like my Reverb G2).  This can give you better hand tracking and full finger tracking but it’s pretty fiddly and requires a lot of setup – plus I’ve found that the training and calibration that allows this hack to work can get messed up and require reconfiguring things all over again.  At times I’ve gotten frustrated with trying to make it work and just gone back to the original Reverb G2 controllers, no hacks or calibration required.


Animating Old Photos

I just tried out this new deep learning tech, Deep Nostalgia, on a couple of old photos of our parents.  You give it an old still photo and it generates a “Live Photo” animation from it – pretty freaky:




Flying All Over the Planet

I’ve been enjoying the new Microsoft Flight Simulator 2020, which features the ability to fly anywhere in the world with often amazing displays of detail and realism, including live weather effects.  If you haven’t seen it yet, here are several written reviews (IGN, Polygon, Gamerant) and a few showcase videos:

Approaching Santa Cruz in a Daher TBM 930
(notice all the detail in the cockpit: sunlight, reflections in the windshield, etc.)

I’m running MS Flight Simulator on my 16″ 2019 MacBook Pro with an ultrawide LG monitor (3440 x 1440) and a Logitech G Pro Flight Yoke system with rudder pedals.  (A yoke is much easier to fly with than the keyboard controls.)  It’s a pretty immersive experience:

(For all of these YouTube videos, you’ll want to go full screen
and force the highest resolution, not just leave it on “auto”.)

Be aware that right now, as with all the newest graphics cards, every flight yoke and joystick is pretty difficult to find anywhere at normal retail prices ($165-ish), as the release of this game (and the pandemic) has driven them out of stock everywhere.

Even with the whole world available to explore, it’s particularly fun to fly around places that you know very well from the ground.  I’ve created a couple of longer videos of such flights – here’s a tour of the Santa Cruz area, including the boardwalk, downtown, Scotts Valley, Felton and north along the coast as far as Año Nuevo:

Some locations (like Santa Cruz above) benefit from detailed photogrammetry data providing lots of realistic detail. Other locations get carefully handcrafted buildings and objects (particularly at select airports), while the rest of the planet gets more generic textures and topographical information from satellite data and auto-generated details like trees and buildings. For example, the generic buildings populating the ghost town of Bodie are very out of place in my little tour of the Eastern Sierra – from Bishop to Mammoth and on to Mono Lake and Bodie:

Lots of folks are already making add-ons that you can drop in to enhance the rendering of a particular location or add a particular plane.  Here’s one great index of available add-ons for MS Flight Simulator.


The 16″ MacBook Pro (2.4GHz 8‑core Intel Core i9) can actually manage to run MS Flight Simulator on my ultrawide monitor with just the laptop’s built-in AMD Radeon Pro 5500M GPU, but only at the lower Medium settings.  This game can be very CPU and even network intensive (the world does not fit on your hard drive), so it can bog down even when your GPU has cycles to spare.

For higher quality settings, I’m using a Red Devil Radeon 5700 XT graphics card in an external GPU enclosure (connected via Thunderbolt) running MSFS 2020 on Windows 10 via Apple Boot Camp.  This setup allows for something between High-End and Ultra settings at 3440 x 1440 resolution.

Update (Jan 2021): I’ve since been able to get one of the new, next-generation GPUs: an overclocked Radeon 6800 XT, and I’m now able to run smoothly at even greater than “Ultra” settings from my 2019 MacBook Pro.  It looks fantastic!

Note that you’ll likely need to go through a bit of hassle to successfully configure these AMD graphics cards under Boot Camp.  See the egpu.io forums and bootcampdrivers.com for help.  The Nvidia cards don’t require workarounds for Boot Camp, but they’re not supported at all on macOS, whereas the AMD cards work under macOS out of the box.

And now in virtual reality: I’ve also picked up a very high resolution HP Reverb G2 VR headset which makes for a truly amazing and engrossing experience.  With a proper VR headset, you get that incredible, brain-fooling trick of virtual reality immersion – of seeing and hearing only the virtual world around you, no matter which way you look.  With the Reverb’s incredibly high 4320 x 2160 resolution, I can’t run at the highest graphics settings (even with that new GPU) but it doesn’t matter – that feeling of immersion is so captivating – feeling like you’re actually sitting in the cockpit.  You’ve got to directly experience it though to believe it.  Watching a video recording shown on a fixed screen in front of you can never convey it.  I’ve written more about experiencing virtual reality here.


Model Y vs Model 3

I decided to go ahead and replace my Tesla Model 3 with a Model Y – both are Performance versions, but both have the standard wheels and suspension for added clearance.

I’ll miss the Model 3 – the Model Y doesn’t feel quite the same.  The Model 3 is definitely more fun to drive just because of how it sits lower and feels more planted, like driving on rails.  However, the Model Y is easier to get in and out of, has much more space for loading stuff, and fitting our two mountain bikes inside is much easier than it was with the Model 3.  I also like the integrated factory hitch option at bumper level, as opposed to the aftermarket hitches for the 3, which had to mount underneath.

The Model Y’s suspension feels a bit “jouncy” (it could really use an air suspension option) and, somewhat strangely, under full launch acceleration the Performance Y doesn’t feel as stable as the Performance 3, presumably because of the taller stance.  It’s still nice to drive though – and it’s still a Tesla, with all the delight that implies.  I’m very happy with it but I would’ve been fine with keeping my Model 3 if only it had a full hatchback design for easy loading of bikes and gear, or if the Model Y had never come out.

Besides the obvious additional rear cargo space, the rear under-floor and side pocket spaces and the frunk are all larger too.  There’s extra room in the rear seats and easier entry/exit all around due to the higher seating position.  The Model 3 is of course more aerodynamic, but the Model Y has a heat pump and an inventive valve system to direct heat to/from the motor, battery or cabin as needed.  This gives the Model Y very similar range to the Model 3 despite being larger – at least until this heat pump/valve system is carried over to the Model 3.

Update: Yup – along with some other changes and additions, the late-2020 Model 3 has gained added range from the new heat pump system.

If you’re interested in buying a Tesla, using someone’s referral link will give you a discount (the amount varies over the years) and grants redeemable credits to the person who referred you.  Here’s my Tesla referral link.

Click through for more comparison pictures:


Update (June 2021): I decided to get some sportier-looking wheels as well as upgrade the tires, since the stock Continentals don’t do that great on snow.  Here’s my Model Y now with Replika R241 alloy wheels (19×8.5) and the highly-rated Vredestein Quatrac Pro XL (255/45 R19) all-season tires:



Tesla Powerwalls Installed

My two Tesla Powerwall 2s were installed last week and now I’m running on my own solar-generated power after dark!

You see, Powerwalls not only keep the lights on when the power goes out, but they also let you automatically time-shift energy each day to avoid using power from the grid at peak demand times – not something you get from a traditional backup generator.  Plus they don’t need any maintenance or fuel.

And yes, this does mean that I’ll still have power for the whole house and, most importantly, running water the next time PG&E needs to shut down the power grid for fire safety.  Yay!

Avoiding Peak Demand Usage

It used to be that daily peak energy demand occurred through the mid to late afternoon, but with the widespread adoption and installation of solar photovoltaic panels, that afternoon demand has evaporated and the peak now comes in the evenings.  As a result, power companies have been adjusting their rate schedules to reflect that, with the highest cost of energy running well after dark, to 8 and 9 pm.

With battery storage, you not only get backup power for the whole house in the case of outages, but you can also automatically store energy generated during low demand periods of the day (including from your own solar panels) and use that stored energy during the later peak hours, even after the sun goes down.  This means that your existing solar PV system ends up being even more effective and cost-efficient.

Time-shifting energy usage with battery storage works so well that Tesla and other companies have been actively deploying massive, utility level battery storage systems around the world, in place of traditional, expensive peaker plants. (Peaker plants are power plants whose primary purpose is to cover periods of high demand.)
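To put some rough numbers on the time-shifting idea, here’s a little back-of-the-envelope calculation.  Every rate and quantity below is made up for illustration – this is not my actual PG&E schedule or usage:

    # Hypothetical time-of-use rates and daily usage (illustrative only).
    peak_rate = 0.40       # $/kWh during the evening peak
    off_peak_rate = 0.25   # $/kWh when the batteries are charged
    shifted_kwh = 10       # energy served from the batteries during peak hours
    round_trip_eff = 0.90  # typical lithium-ion round-trip efficiency

    # Buying that energy at peak vs. storing cheaper energy earlier:
    peak_cost = shifted_kwh * peak_rate                            # $4.00
    storage_cost = (shifted_kwh / round_trip_eff) * off_peak_rate  # ~$2.78
    print(f"daily savings: ${peak_cost - storage_cost:.2f}")       # ~$1.22

Small numbers per day, but they accumulate every single day – and the same arithmetic, scaled up enormously, is why utility-scale batteries can compete with peaker plants.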

Powerwall Configuration Options

I really like the Tesla app for configuring and monitoring your Powerwall and, if present, your solar PV system.  It continually displays the flow of power between your home, Powerwalls, solar panels and the power grid – in real time.

You can keep the system in a “backup only” mode (keeping the batteries fully charged at all times), in a “self-powered” mode (where it stores any excess solar generation and uses it to power the home as much as possible each day), or in one of two time-based control modes where it forecasts your future energy usage and time-shifts your energy use and solar production to fit the peak, off-peak and shoulder periods of your particular electricity rate schedule.  And all of these modes operate under a “storm watch” feature that will automatically override the normal behavior of the Powerwall to prepare for forecasted storms or other events that may result in an outage.  All very cool!

Balanced vs Self-Powered on two cold, partly cloudy days with the heat pump cycling over much of each day.

I tried running in the “balanced” time-based mode for most of March but then switched to “self-powered” mode because, during the non-summer months (October – April), there is no peak rate and the difference between partial-peak and off-peak is only a couple of cents.  In “balanced” mode, the system would make a point of exporting any excess solar generation during the partial-peak period for credit rather than continuing to charge the Powerwalls, which made it more likely to need grid power overnight.  Now in “self-powered” mode, the Powerwalls are charged more and are usually able to handle the entire house load overnight – depending on the weather (solar production and house heating need).

Update (June 2020): Well, it’s not even summer yet and the Powerwalls are already letting the house run completely self-powered most days.  And by “house” I mean everything (central heating, water heater, cooktop/oven, washer/dryer, well & pressure pumps, septic system pumps) plus the cars (we’re both driving electric).  There were a couple of days that were a bit stormy and cold enough to want to heat the house and a couple days of heavier charging of one of the cars, but every other day required no power from the grid (day or night) – and yet the system still exported plenty of excess solar generation by the end of the day.  Having a couple of Powerwalls really does sort of double how much you get out of your existing solar panels.  (I’ve got a 9 kW solar system.)

During the summer months, when there’s a daily period of much higher peak pricing, I expected to make use of the time-based “balanced” mode to optimize how much credit I get for excess solar generation.  As it turns out, in the “balanced” mode the Powerwalls will switch to exporting solar power during the peak period even if the batteries aren’t yet full.  So I switched back to “self-powered” mode to let the batteries fully charge each day to be sure to have plenty for overnight usage.  They tend to fill up by early afternoon on sunny days and plenty of excess solar power gets exported at the peak rates anyway.  This excess solar generation during the spring to autumn months will still make up for the power I need from the grid over the winter when the house uses much more energy for heating.  (My panels were installed to optimize for summer peak rates – 75% of them are oriented to the west for summer afternoons.  Now that I have the Powerwalls, I almost wish I had optimized them more for the winter sun.)

If you’re interested in buying a Powerwall, using someone’s referral link will gain you (and the person who referred you) a small rebate.  Here’s my Tesla referral link.

Installation Issues

My installation by Tesla wasn’t without issues. Continue reading »


The Unistellar eVscope

I received my eVscope from Unistellar in January of 2020 and I thought I would share my thoughts and experiences with it – particularly since there wasn’t a lot of info available when I ordered it back in July of 2019.  I’ve since been adding to this page to provide additional information.

Overview

The Unistellar eVscope is quite different from a traditional optical telescope.  It’s a highly integrated and automated digital imaging telescope that enables you to easily find and view deep sky objects in color and detail that would not normally be perceptible to your eye looking through a normal optical telescope.  In addition, the eVscope is designed to let you easily participate in and contribute data to crowd-sourced “citizen science” projects.

The eVscope is a 4.5-inch Newtonian reflector that captures light on a highly sensitive, low noise Sony IMX224 color sensor while using a motorized alt-az tracking mount and autonomous field detection to automatically identify, align and continually track its view of the sky.  Integrated image-processing software takes and combines an on-going series of short exposures to generate an image in almost real time that brings out much of the very low light, color and detail that’s not visible to the human eye even when looking through a normal telescope. This view accumulates over just seconds and minutes and is displayed both in the telescope’s eyepiece (on an OLED display) as well as on a WiFi-connected smartphone.  The whole thing is self-powered via an integrated 9-10 hour rechargeable battery, fits into a large backpack and weighs just under 20 lbs. including the provided tripod.

In other words, it’s quite an impressive level of integration!
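For the curious, here’s a tiny sketch in Python of the live-stacking idea – my guess at the general shape of such a pipeline, not Unistellar’s actual implementation (which also has to handle field rotation, dark frames, hot pixels and much more):

    import numpy as np

    def register_offset(ref, frame):
        # Estimate the integer (dy, dx) shift of `frame` relative to
        # `ref` using FFT phase correlation.
        cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]  # wrap to signed offsets
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return dy, dx

    def live_stack(frames):
        # Fold each newly registered short exposure into a running
        # average; the yielded view grows less noisy as frames accumulate.
        acc = frames[0].astype(np.float64)
        n = 1
        for frame in frames[1:]:
            dy, dx = register_offset(acc / n, frame.astype(np.float64))
            acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
            n += 1
            yield acc / n

Each individual exposure is short enough that tracking errors don’t smear the stars, and the noise in the displayed image falls off roughly with the square root of the number of stacked frames – which is why the view visibly improves over seconds and minutes.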

While you can of course outfit a normal telescope and tracking mount of your choosing with the necessary cameras, computer, tracking and image stacking software, WiFi connectivity, battery power, etc., you then also have to develop the expertise to use and troubleshoot this software – and it’s not trivial. To be clear, the eVscope is not really designed to be a sophisticated imaging tool or to compete with the results you can eventually get with lots of practice and expertise and many hours of capturing and processing images.  Instead, the eVscope is intended to let you very easily see and enjoy much more detail than you can with a normal, unaided telescope and it provides quick setup, ease of control from your smartphone, and a fun, real time viewing experience all wrapped up in a lovely, convenient little package.

It is however not cheap to integrate all these components into such a convenient package.  As such, I wouldn’t recommend it for someone wanting to dip their toe into astronomy on a small budget.  It’s pretty clear though that this makes for a wonderful tool for astronomy outreach programs anywhere and I’m really looking forward to sharing the experience with friends and their families.

Setup and Use

I recorded a video to demonstrate the ease of setting up and using the eVscope:

I forgot to record using the focus ring on the base of the scope, so perhaps I’ll add that later, but Unistellar provides a nice page detailing how to use it with the provided Bahtinov mask: How to use the Bahtinov mask?  (It’s great how the mask is integrated into the cap!)

With the earlier version of the software (version 0.9), I did encounter a lot of bugs, but most of these were addressed in version 1.0 (April 2020), and it’s performing even better in version 1.1 (October 2020).  They’ve made many improvements over just this year and added functionality that makes the eVscope an even more fun and amazing experience to share with people.

The ease of setup and the speed with which you can get to viewing objects is great.  I really like the convenient size of the thing, including the integrated power supply and the optional padded backpack to carry and protect it.  The initial star alignment process is super fast (around 30 seconds) and its autonomous field detection system seems to do a great job of tracking the sky and dealing with field rotation over several hours.  I did find the views appear slightly soft (presumably from the effort to track, align and integrate frames over many minutes) but still quite enjoyable, and perhaps this will improve with future updates.  You can see some sample images below.  I should note that I haven’t tried collimating the scope yet, so I’ll update here when I get the chance.  Update (April 2020): I finally had both time and a bit of clear weather to collimate the telescope – it turns out it was off a little but is now well aligned.  Over time I’ll try to replace all the images in the gallery with new ones post-collimation.  So far it’s just the last few in the gallery that were taken after collimation (Whirlpool Galaxy, Ring Nebula, Eagle Nebula).

Another aspect of the very quick and easy setup is that it takes less than a minute to pull out the scope on a whim, stand it up on the open patio outside my bedroom, remove the cap, turn it on, dash back inside out of the cold winter night, and settle in with my phone or iPad to mess around exploring the sky in warmth and comfort.  I definitely cannot set up and align my 8” SCT and German equatorial mount so quickly and easily, even with the auto-align accessory – plus there’s setting up cameras, laptop, myriad power and USB cables, etc.  Not to mention the disassembly and take-down time afterwards!

That said, I don’t think you should think of the eVscope as astrophotography gear. Everything is integrated to make it easy to observe deep sky objects with color and detail you can’t see without the aid of sensors, but it does not provide the means to capture frames and do your own stacking or more sophisticated and detailed imaging with a non-color sensor and color filters, etc. I would not expect this telescope to compete with custom gear where you have control over everything (and of course have to learn how to do everything). That is not the purpose of its design. Similarly, the cost reflects the benefits of integrating all these pieces (sensor, tracking software, stacking/imaging software, display, power supply, etc) into a small and elegant package without any cables or separate components to manage while also making it dead simple to use. That’s what you’re paying for and that’s the trade-off.

As of February 2020, the provided documentation was pretty good in some areas but a little weak in others. For example, I was surprised how long it took me to find a little blurb buried in a list at the back of the printed guide that explained how to tell if the battery was fully charged.

As of May 2021, the provided online documentation is much improved.  I don’t know what has changed with the printed instructions since I received my scope back in January of 2020, but there’s plenty of information now available from their online knowledge base and more and more questions are getting answered over time.

Citizen Science

As I mentioned above, the eVscope is also designed to participate in crowd-sourced “citizen science”, in partnership with the SETI Institute.  As per their web site, the eVscope “allows users around the world to participate in observing campaigns to image and collect data on objects of special interest to researchers.  In Campaign Mode, image data is automatically sent to a data repository at the SETI Institute’s headquarters in Silicon Valley. The international scientific community can then access unprecedented volumes of image data for specific objects, from thousands of telescopes around the world, at different dates and times. This in turn, can enable new discoveries and enhance our understanding of the universe around us.”

In early February 2020, I had the opportunity to participate in one of these observing sessions.  I received an email providing instructions for a particular target and observing time to collect data on an exo-planet transit of “WASP-43b”.  The procedure involved setting up beforehand, selecting and confirming the target, and then starting the Enhanced Vision capture process and letting it run autonomously for several hours as it tracked the target.  Afterwards, I captured 30 seconds of “dark frames”, initiated the download of data from the telescope and then uploaded it to their servers.  While I encountered a few issues along the way (included in my bug list below), it was fun to get to participate in a data gathering session like this.

Here’s a more recent example of results from a “citizen science campaign” I was able to participate in.  This was an effort to detect an occultation by a Jupiter Trojan asteroid (“Palinurus”) on May 27th, 2021:

Sample Views

Here are a couple of real-time recordings of the Unistellar app showing the live view from the eVscope of the Orion Nebula (over 3.5 minutes) and Bode’s Galaxy (over 6 minutes):


Here are some images illustrating the views you can generate and enjoy in just minutes with the eVscope.  I’ve included both screenshots of the full image displayed on my phone and the circular cropped image that it produces for display in the eyepiece and allows you to save from your phone.  (The eyepiece shows only the circular cropped image; it does not display the descriptive text or circular outline.)  I have not done any further processing on these images – they are just as they were originally generated by the eVscope app or screenshot-captured off my phone.  (Originally, the eVscope app would only save the circular cropped version, but now the app will let you save the full uncropped version.)

The Sony IMX224 Exmor CMOS color sensor used in the eVscope has a resolution of 1305 x 977.  The images saved from the eVscope app are 1280 x 960 and the circular cropped images are 1080 x 1080.

Click on any image below to see the full size version and to browse the gallery:

Flame Nebula NGC 2024

Running Man Nebula NGC 1977

Bode’s Galaxy M81

Orion Nebula M42

Eagle Nebula M16

Andromeda Galaxy M31

Whirlpool Galaxy M51

Lagoon Nebula M8

Eastern Veil Nebula, NGC 6992

Ring Nebula, M57

Original eVscope For Sale

I’ve decided to upgrade to the newer eVscope 2 and so my original eVscope is now up for sale.  I’m asking $2000, including the Unistellar backpack.  Contact me via email: (chris “at” crimdom “dot” net).

Feature Requests

It’s really great that Unistellar is obviously listening to its users and has been steadily improving the software for the eVscope.  Many of my own issues and feature requests have already been addressed.

Here are my feature requests as of May 2021 (both current and previously implemented), using version 1.3 of the Unistellar app, running on iOS 14 (iPhone 12 Pro and an iPad Pro):

PLEASE NOTE: There’s been a lot of changes to the app which, as of November 2022, is on version 2.1.  I haven’t spent much time with it yet so probably all of these notes are out-of-date now.

  • Enhanced Vision for bright planets: Would it be possible to provide the ability to automatically select and stack very short exposures (only 100’s of milliseconds) when imaging very bright objects like the planets Mars, Jupiter and Saturn?  Currently, Enhanced Vision only operates with very long exposures – obviously necessary for dim, deep sky objects.
  • More Enhanced Vision improvements: Would it be possible to improve the Enhanced Vision processing to better deal with highlights?  Currently all the brighter stars in a field quickly develop into very over-exposed(?), large solid balls.  Perhaps there’s some more finesse that could be done automatically to improve or retain the dynamic range when combining exposures?  Or perhaps provide access to some more advanced exposure controls?
  • Allow information overlay on saved images separate from cropping option: As of version 1.0, you can now choose to save the full, uncropped, undistorted image by choosing to not apply the “image overlay” option under General options.  However, this also removes the useful information text like the object name, exposure time, location and date which would often still be nice to have appended along the bottom of the image.  I suggest that the cropping option and the information overlay option be separate options.
  • Goto support via SkySafari: Would be wonderful to be able to use SkySafari to browse, select and go to targets with the eVscope (as you can with many other telescopes/mounts), as well as to be able to easily see and explore where the scope is currently pointing.
  • Fix “Do not show again” message: This is the message that displays after the message to confirm whether you’d like to save the currently generated image when exiting Enhanced Vision mode.  Two things here: 1) Change that message to “Do not ask again?” which is less awkward and easier to understand what you’re referring to and 2) please stop asking every bloody time!  Once (or at most, once per session) is enough.  If I answer “Yes” that I want to have the confirmation to save, that means “yes, I do”.  So please stop reconfirming over and over again.  You provide the option to turn off the save confirmation in preferences and that’s enough.
  • Display useful status/info: Please provide more status info in the app like current sky coordinates and battery charge state estimate.  (The coordinates are only currently available after the fact in the saved images. There is now a battery charge state icon – no percentage estimate, but still useful – and Enhanced Vision mode now displays elapsed exposure time.  Thank you for that!)
  • “Picture was saved” notification interferes with usage of the app:  (I need to verify whether this still happens in version 1.1.)  After saving an image, a little notification appears at the bottom across the modal tab buttons for a couple of seconds, forcing you to wait until it disappears.  Please move this message elsewhere where it doesn’t get in the way and/or reduce how long it’s displayed.
  • Improve catalog display: The current style of displaying catalog items as a grid of large icons requires that the object names often be truncated.  Also, the large generic icons to indicate the type of object aren’t a great use of the available space.  How about a list-oriented view (and a smaller icon) to make better use of the screen real estate?
  • Shared WiFi connectivity: Would be nice to be able to optionally configure the eVscope to use an available WiFi network instead of its own WiFi so as to 1) support extended WiFi reach, 2) allow devices to access both the internet and the telescope simultaneously, and 3) to avoid the need to always switch to the telescope’s WiFi.

These options or features are now available:

  • IMPLEMENTED – More expansive view through the eyepiece: Unistellar has now announced the eVscope 2.0 which, among other things, includes an apparently improved eyepiece and display.  I haven’t seen it personally but it sounds like they’ve tried to address this.  I’ll leave my original request here:
    I can see that the design of the eVscope was very much to provide an optical, telescope-like viewing experience – which is of course why there is an eyepiece on the scope at all.  However, I think it is a mistake not to maximize the apparent field of view in the eyepiece to provide more of a grand and wondrous view.  To that end, I wonder if you could use a different lens with the eyepiece to really open up the apparent magnification and field of view of the image you’re able to generate.

Currently you see a small constrained view far down the end of a tube.  You should really try to shoot for a big gorgeous panoramic view, a “spacewalk vista”, like what you get with TeleVue’s fantastic, wide apparent field eyepieces.  Could you simply make use of the same kind of optics and/or display technology inside the electronic viewfinders that Sony and other camera manufacturers use in their digital SLR cameras?  These digital display viewfinders do a fantastic job of enlarging the apparent view on these tiny little displays.  They’re a joy to use and provide a much larger, clearer, detailed view than you get from the displays mounted on the backs of these same cameras.  I realize this would require a hardware change but oh, what a view that would be!

Along these same lines, could there be a way to make use of the full uncropped image in the eyepiece?  With relatively large targets, the uncropped view on the phone’s display is much more expansive and enjoyable than the much constrained circular cropped view.  Could there be a way to present the full uncropped rectangular view and allow it to be rotated in the eyepiece to deal with changes in the telescope’s orientation?

  • IMPLEMENTED – Simplify data upload procedure:  This is not something I thought to ask for, but I want to call it out as it’s a very nice improvement.  As of version 1.3, you now just click an “Upload Data” button and the eVscope will park the scope, connect to your available WiFi network and directly upload its data before eventually shutting itself down.  You no longer have to go through a tedious transfer process to your phone and then again to upload it to their servers.  Very nice!
  • IMPLEMENTED – Send Observation Parameters: As of version 1.3, there is now a mechanism to load observation parameters into the app by merely clicking a link/URL.  This is very nice to see and use, thank you!  (My original request: Given the eVscope’s ability to participate in “citizen science” observations and data collections, it seems like there should be a more direct way to send observation parameters (like RA/Dec coordinates and exposure/gain settings) to the scope from an emailed observation request.  Perhaps encoded in a URL that’s interpreted by the Unistellar app?  It’s kinda silly that you have to transcribe lengthy coordinates from an email on the phone to the Unistellar app on the phone.  You can’t even copy/paste right now!)
  • IMPLEMENTED – Option to save full, uncropped image: As of version 1.0, there is now an option to choose between saving the full frame image or the circular cropped version with the info.  Yay!!  However, it would be nice to get the textual info with the full uncropped version too.  (My original request: There is only an option to save or share the circular cropped image.  It’s both heavily cropped and mildly distorted around the edges to give it a sort of eyepiece lens effect.  Please provide a built-in option in the app to save the full uncropped, undistorted image!  I should not have to go to the trouble to capture a screenshot of my phone and manually crop it to get the full image.)
  • IMPLEMENTED – Allow panning of zoomed view in the app: You can now pan around when zoomed in on the current image in either Live View or Enhanced Vision mode.  It works really well and smoothly and shows your current zoom level.  (My original request: The app display’s zoom feature only zooms into the center of the image.  You can’t zoom in anywhere else in the image.)
  • IMPLEMENTED – Allow image save during live sky view: As of version 1.1, you can now save an image based on the current live view, not just an Enhanced Vision view.  (My original request: The option to save an image is only enabled during Enhanced Vision mode, not during live sky view.  This would be useful for very bright objects (like the moon and planets) when Enhanced Vision mode doesn’t produce useful results.)
  • IMPLEMENTED – Dedicated iPad version of app: As of version 1.1, the iPad version of the app is no longer merely a scaled-up version of the phone app.  It now uses the whole screen and takes advantage of all the additional screen real estate.  It looks and works really well and is definitely now my preferred device for working with the eVscope, given the much larger display.  (My original request: The iOS app was just scaled up from the phone version on the iPad and did not take advantage of all the additional screen real estate.  The iPad would be an even better platform for viewing/controlling the eVscope if the available space were well utilized.)
  • IMPLEMENTED – Display useful status/info: As of version 1.1, Enhanced Vision mode now displays elapsed exposure time. They’ve also added a calculation of remaining time that a given object will be visible in your sky view.  As of 0.9.3, there’s now a battery charge state icon.  No percentage estimate, but still useful.
  • IMPLEMENTED – Improved data upload process: As of version 1.1, they now have changed how the upload data process works: You now provide your internet-connected WiFi credentials to the eVscope, press a button and the telescope performs the upload directly to their servers, without further involvement from your phone or tablet.  And it will optionally park the eVscope and shut it down when it completes the process.  This is a great improvement over the old, incredibly slow download to phone then upload to internet process.
    (My original complaint: Why are the download/upload functions so incredibly slow?  Even over local WiFi from the telescope to the phone?  How many gigabytes could that possibly be?  I don’t have enough storage on my phone for it to take that long.  Is there a bug here?)
  • IMPROVED – Allow interaction with other controls when picture adjustment controls are present: As of version 1.1, a simple tap into the image display area will dismiss the controls, so it’s not as annoying as it used to be.  It might still be worthwhile to change the interface design so that the picture adjustment controls don’t prevent interaction with the display area – perhaps by making the bottom area into a tabbed interface so you can switch between info display, adjustment controls, etc.  (My original complaint: Showing the picture adjustment controls (gain, exposure, contrast, brightness) blocks the ability to zoom the image or save it.  I find this inconvenient since I’m always making little tweaks to these controls and want to zoom in or out and save the image in between the adjustments.)
  • IMPLEMENTED – Finer positioning control: As of version 1.1, this has improved.  Short taps on the directional arrows now seem to provide small enough movements of the scope to more easily adjust your view.  (My original complaint: The smallest possible position adjustment was a single quick tap on the directional arrows around the joystick control, but this still moved objects in the display about 1/6 of the way across the field of view.  In other words, it wasn’t possible to move the scope by a smaller amount.)

Issues / Bugs

PLEASE NOTE: There’s been a lot of changes to the app which, as of November 2022, is on version 2.1.  I haven’t spent much time with it yet so probably all of these notes are out-of-date now.

As of May 2021, these are the issues I currently see on version 1.3 of the Unistellar app for iOS 14 (iPhone 12 Pro and an iPad Pro):

  • While waiting for the start time of a scheduled “citizen science” observation event, I’ve had the observation parameters (coordinates, gain, exposure time, etc) get cleared after returning to the Unistellar app.  I was able to switch in and out several times over 5-10 minutes without losing the loaded parameters but then, one minute before the start time, all the settings were cleared out and reset.  Worse, selecting the link to reload the parameters didn’t work!  I had to quickly force quit the app and relaunch it to get the observing parameters to load again.
  • While scrolling through the catalog list, the display will frequently and seemingly randomly jump back to an earlier point in the list – forcing you to have to try to find where you just were again.  Annoying!

The following issues all seem to be addressed – or at least haven’t happened again yet as of the given version.

As of version 1.3:

  • The Unistellar app has trouble reconnecting to the eVscope while Enhanced Vision is still processing after temporarily switching out to another app.  It can take several false starts before you regain control of the eVscope after switching away and back.
  • I’ve had the Enhanced Vision mode hang up after an extended run (28 minutes).  The elapsed time stopped updating (stuck at 28 minutes) and, while I could still interact with the app, any picture adjustment changes would not apply and the controls would just jump back to where they were.  I had to exit Enhanced Vision mode to get things working normally again.

As of version 1.1:

  • Sometimes lots of large random distortions and smears of light and color appear in the display. (I’m not talking about while slewing which would be expected.)

As of version 1.0:

  • The gesture to pinch zoom is buggy and at times it jumps around or refuses to stick.  Strangely, at other times, it works just fine.  I haven’t picked up on a pattern as to when it doesn’t work.  As of version 1.0, this is working much better.  It can still act a little wonky at times but it’s much better.  As of version 1.1, it’s working great and allows you to pan around the zoomed image.
  • The app will immediately crash/exit when you return to the app after being disconnected from the scope or wifi or after having to leave the app for some reason and come back.  The app will also occasionally crash/exit for other unknown reasons in the midst of using it, but I haven’t tried to maintain a list of each circumstance.  Hopefully you’re receiving the iOS crash reports from Apple.
  • I’m seeing a patch or trail of green pixels on most images in the same place.  I’m guessing I’ve got a hot/stuck pixel on my image sensor and the random walk pattern of the pixel is just the effect of combining many images as the field shifts and rotates while tracking the target.  Is there support for subtracting out hot/stuck pixels?  As of version 1.0, the “Take Dark Frame” action also results in removing any stuck pixels from the eVscope’s imaging.
  • Adjusting the contrast/brightness controls while enhanced vision is running usually results in the slider jumping back to its previous position while the current frame completes, and only then jumping to where you set it.  If you don’t realize it’s going to do this, you’ll try to move it again and again and only get more confused as to why it keeps jumping around.  It needs to at least stick in the new position even though the next frame is still being generated.
  • On one occasion, the joystick/slew buttons seemed to stop working but after quitting and relaunching the app I found that it had actually slewed but had apparently stopped updating the displayed view.
  • On another occasion, the joystick slew buttons stopped working and the scope view began shifting randomly.  Had to quit/relaunch the app to fix it.
  • Another time the app got stuck in a goto/slewing operation and none of the buttons worked any more and I couldn’t select another target.  The telescope seemed to no longer be tracking the sky – the star field just drifted in the display.  Force-quitting the app didn’t help.  I had to power down the telescope and restart it.
  • Seems like the app or the telescope gets confused if you exit the app while enhanced vision mode is engaged.  Are you supposed to be able to exit and come back while enhanced vision is in progress?
  • Often the app will forcefully halt the enhanced vision mode without warning and without a chance to save what you have so far – it just returns to the live view.  Sometimes there is no message at all and other times there will be an error message like “too bright” even when it appears there is still much more light that could be captured over most of the frame: only a couple of bright stars in a large field of dim nebula, before the nebula has even really become visible.  Please don’t forcefully stop!  (Also, how is it that we were instructed to leave enhanced vision mode running for hours during the recent exo-planet transit when I have had it quit after just 10 minutes or so on a nebula??)
  • I found that both the download and upload sequences would sometimes stop processing after many minutes and I would have to quit and restart them.  This happened several times (4-5 maybe?).
  • On a couple of occasions, the “goto” catalog list would jump or reset its scroll position while trying to scroll through it making it difficult to select the desired item.  Usually it’s fine – I haven’t figured out when this happens yet.
  • Please rework those three sequential messages that ask whether to automatically save the image after running enhanced vision.  It was frustrating trying to get it to just prompt me to save the image without also asking me two follow-up questions every time.

Unistellar eVscope

I recently received my eVscope from Unistellar and after just a few sessions with it, I thought I would share my thoughts and experiences with it so far – particularly since there wasn’t a lot of info available when I ordered it back in July of 2019.

Overview

The Unistellar eVscope is quite different from a traditional optical telescope.  It’s a highly integrated and automated digital imaging telescope that enables you to easily find and view deep sky objects in color and detail that would not normally be perceptible to your eye looking through a normal optical telescope.  In addition, the eVscope is designed to let you easily participate in and contribute data to crowd-sourced “citizen science” projects.

The eVscope is a 4.5-inch Newtonian reflector that captures light on a highly sensitive, low noise Sony IMX224 color sensor while using a motorized alt-az tracking mount and autonomous field detection to automatically identify, align and continually track its view of the sky.  Integrated image-processing software takes and combines an on-going series of short exposures to generate an image in almost real time that brings out much of the very low light, color and detail that’s not visible to the human eye even when looking through a normal telescope. This view accumulates over just seconds and minutes and is displayed both in the telescope’s eyepiece (on an OLED display) as well as on a WiFi-connected smartphone.  The whole thing is self-powered via an integrated 9-10 hour rechargeable battery, fits into a large backpack and weighs just under 20 lbs. including the provided tripod.

In other words, it’s quite an impressive level of integration!

Continue reading »


Check Your Old Sunglasses and Goggles

We were just about to toss out some old goggles that were in the closet and, on a lark, decided to check their UV protection.  Mine were maybe 10+ year old Oakleys (not used in many years) and Darlene’s were her old Smiths that she had been using up until last month.  Both are fitted with orange “high contrast” lenses.

Well, both appear to be failing to provide UV protection now.  Not good!

Sunglasses and ski goggles can lose their UV protection over time.  So… double-check your old sunglasses and goggles for UV protection!

I tested with a UV flashlight (“black light”) I picked up recently off Amazon when I decided I didn’t want to trust the unknown Chinese manufacturer (“Oho”) of some new camera goggles I’d bought.  As it turns out, those new goggles and the old Liquid Image camera goggles I’ve been using for many years pass the UV flashlight test fine.  (As do my and Darlene’s sunglasses.)

You can do a quick and dirty test using just a $20 bill and one of these inexpensive UV flashlights: shine the light through the lens at the bill – if the security strip still fluoresces, UV is getting through.  You can also get a more professional test (with an actual UV blocking measurement) from your local optician.

Here’s a bad result on the left (the strip is fluorescing because UV light is getting through the lens) and a good result on the right – both are orange-tint lenses and many years old:


Fire and Smoke (and air filter test)

Lots of wildfires in California lately – over 100 in the Santa Cruz area this year – so they’ve closed many of the county parks to try to reduce the risk.  This one from a couple of weeks ago (the Rincon fire) was quite visible from my place, but happily they were able to get it under control in a couple of days:


After seeing someone test the air filtration of Tesla’s Model X and its “Bioweapon Defense Mode” against the heavy smoke we’re getting from our wildfires this month, I decided to pick up an inexpensive air quality sensor to test my home’s air as well as my Model 3’s more mundane filtration system.  (The Model 3 doesn’t have the Model X’s fancy defense mode or its huge HEPA filters.)

With the PM2.5 sensor reading 150 μg/m3 (unhealthy) in the San Jose area (due to smoke from the Camp Fire that burned through Paradise, CA), I found that the Tesla Model 3’s air filter would bring things down to the 20s in the cabin in just a few minutes with recycle air turned on.  Later, I stopped and made a video recording it falling from 135 to 5 μg/m3 in less than 10 minutes.  It climbed back up to the 80s pretty quickly though when I turned off recycle air and let it bring in fresh air:

This video was even picked up by Teslarati (“Model 3 protects owner…“) and re-tweeted by Elon.

Hi, Elon!  But they didn’t pick up on my follow-up test to compare the Tesla to a Toyota:

Comparison with Toyota RAV4 EV

I decided to repeat the test with my 2013 Toyota RAV4 EV.  This time the starting air quality wasn’t nearly as bad as in my initial test, but both the Tesla and the Toyota were able to filter the cabin down to a reading of zero from a start of 50 μg/m3 with recycle air turned on.  At full fan speed, the RAV4 took about 10 minutes, while the Model 3 did it in about 3-4 minutes.

With recycle air turned off (fresh air intake on), the PM2.5 reading in both cars climbed again.  The Tesla held it around the low-to-mid 30’s, but the RAV4 went back up to essentially the outside reading of 50 μg/m3.  So the Model 3’s system really does filter better.
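For the curious, these numbers fit a simple well-mixed “box model” of the cabin: with recycle on, the particle count decays exponentially, at a rate set by how much filtered air the HVAC delivers relative to the cabin volume.  Here’s a rough back-of-the-envelope sketch in Python using my readings above (the first-order, well-mixed assumption and the function names are mine – this is an estimate, not anything from Tesla or Toyota):

```python
import math

def cleanup_rate(c_start, c_end, minutes):
    """First-order decay rate k (1/min) implied by two PM2.5 readings."""
    return math.log(c_start / c_end) / minutes

def minutes_to(c_start, c_target, k):
    """Time for a well-mixed cabin to decay from c_start down to c_target."""
    return math.log(c_start / c_target) / k

# Model 3 on recycle: 135 -> 5 ug/m3 in about 10 minutes (the video above)
k = cleanup_rate(135, 5, 10)  # ~0.33/min, i.e. roughly 20 effective air changes/hour
print(f"k ~ {k:.2f}/min")
print(f"150 -> 20 ug/m3 in ~{minutes_to(150, 20, k):.0f} min")  # ~6 min, matching "a few minutes"
```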

One other thing of note is that the RAV4 ended up with a much higher concentration of TVOC (total volatile organic compounds), even though the vehicle is five years old – presumably off-gassing of some of the materials in the cabin.  Oh, and I forgot to turn off the A/C in the RAV4 for the test – hence the temperature drop.

Here’s more detail in screenshots – RAV4 start and finish with recycle on:


Model 3 start and finish with recycle on:


On a subsequent four-hour drive to Tahoe in the Model 3, I encountered much worse air along the way (San Jose, Central Valley, Sacramento, etc.).  I’d guess the PM2.5 count was easily at least 150 μg/m3, and probably much higher in places, though I avoided opening the windows to test it.  I kept the air on recycle and saw that the particle count held down around 20, sometimes climbing into the 30’s.  Not bad, given how bad it was outside.
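That leveling-off is just what a leaky box model predicts: even on recycle, some outside air infiltrates, and the reading settles where filtration balances infiltration.  Continuing the sketch above with purely illustrative numbers (the cabin volume and leak rate are guesses of mine, not measured values):

```python
# Steady state on recycle: leak * C_out = (CADR + leak) * C_in
#   =>  C_in = C_out * leak / (leak + cadr)
cadr = 1.0     # m3/min of filtered air: k (~0.33/min) times a guessed ~3 m3 cabin volume
leak = 0.15    # m3/min of outside-air infiltration -- an illustrative guess
c_out = 150.0  # ug/m3 outside, my estimate during the drive
c_in = c_out * leak / (leak + cadr)
print(f"steady-state cabin PM2.5 ~ {c_in:.0f} ug/m3")  # ~20, in line with what I saw
```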

If you’re interested in buying a Tesla, using someone’s referral link gives you a discount (the amount has varied over the years) and grants redeemable credits to the person who referred you.  Here’s my Tesla referral link.

Also tagged , , , | Leave a comment

Maker Faire 2018

Darlene and I made it out to Maker Faire again this year:

Click through for the full gallery:

    

Also tagged , , | Leave a comment

First Launch of the Falcon Heavy

On Tuesday, February 6th, SpaceX successfully launched its Falcon Heavy rocket on its inaugural flight, sending Elon Musk’s original Tesla Roadster and “Starman” into a far-reaching orbit around the Sun as a test payload.  Happily, I was able to fly out to Florida and experience the launch firsthand from the Kennedy Space Center’s closest available viewing location for the general public – just 3.9 miles from the launch platform!  (It’s just too bad they haven’t removed the historic-but-no-longer-needed launch tower at LC-39A, as it was sitting between us and the Falcon Heavy.)  Still, it was quite the show, with essentially three Falcon 9 cores strapped together and all twenty-seven engines firing simultaneously – not to mention the amazing, never-seen-before simultaneous return of the two outer boosters to the nearby landing zone!

I’ve made a video of what it was like to watch (and hear) from our vantage point:

The Falcon Heavy launch as experienced from the closest public viewing area

This viewing location is part of the Kennedy Space Center’s “Feel the Heat” ticket package, which takes you to the Apollo/Saturn V Center to view a launch and includes a buffet, some commemorative items, and return admission to the Kennedy Space Center on a later date to enjoy the rest of the exhibits.

You’re given an assigned arrival time some 5-6 hours before the launch to catch your bus (and told not to come earlier), but for this historic event there were so many people that it took hours to get through the security gates, board a bus (really? loading the buses serially??) and be delivered to the viewing area.  By the time we unloaded at the viewing area, the pickings were slim for anywhere on the grounds to set up a tripod with a good, unobstructed view, because apparently many folks had shown up an hour or more earlier.  Anyway, I staked out a spot between others some three hours before the scheduled launch, but had to skip the buffet to keep watch over all my gear.

The launch ended up being delayed several times due to high-altitude wind shear, and we were all getting a little nervous that they’d miss their launch window for the day (1:30 pm – 4 pm) as they rescheduled all the way up to 3:45 pm.  But then, about an hour before that, they made the call to go ahead and start loading the liquid oxygen – meaning a go for launch!  Hurrah!

And then 5, 4, 3, 2, 1… and great clouds of steam erupted as all 27 engines fired – quite the sight!  We couldn’t see the rocket until it cleared that annoying (and unneeded!) tower, but afterwards the light intensity of the exhaust was incredible as it climbed into the sky.  With the cheers of the crowd around you, you hardly notice the absence of sound from the rocket at first – at 3.9 miles, the sound takes some 18 seconds to reach you – but then it arrives as an amazing, stuttering roar.

Then we watched it climb and roll and, higher up, begin to build a beautiful column of vapor – into which it eventually disappeared.  After a bit, it reappeared further east as a faint set of exhaust plumes still coursing away.  On the monitor we could watch and hear the announcement of each successful milestone, and cheers would erupt each time – like with the separation of the side boosters and their retro-firing to return to Cape Canaveral.

Minutes later, the two side boosters appeared in our sky, coming down at incredible speed.  We all lost track of them when they cut their engines again, though, and unfortunately many of us weren’t in a position to see them reignite for their final deceleration over their landing targets.  On the monitors, of course, we could watch the video feed of them landing themselves perfectly (and vertically!), like something out of science fiction – but it wasn’t until after they had touched down that their twin sonic booms reached us.  We all learned later that the center core didn’t fare so well: two of the three engines needed for its landing burn failed to restart (not enough igniter fluid), and it hit the ocean close enough and hard enough to damage the autonomous drone ship waiting for it.  But hey, this was a test flight!

 

The Falcon Heavy is now the most powerful operational rocket in the world, with the greatest lift capability of any current launcher – though it will eventually be surpassed by NASA’s upcoming “SLS” rocket as well as SpaceX’s own future “BFR”.

Meanwhile, “Starman” continues his/her epic journey in space:

Click through for my full photo gallery from the launch and my follow-up visit to the Kennedy Space Center:

      

Here are links to more videos of the first Falcon Heavy launch:

Also tagged , | Leave a comment

A Flying Camera

2.9 minute video demonstration of the Mavic Pro (223 MB)

This is a short little video montage of my first few flights around my house with DJI’s Mavic Pro – a fantastic, compact little flying camera platform.  This thing folds down to about the size of a quart-size water bottle, weighs only 2 lbs with a battery and flies for about 25 minutes per charge.  It’s got a tiny gimbal-stabilized 4K camera that can capture up to 4096×2160 video.  (The video above is downgraded to 1280×720, but here’s a short snippet of 4K footage.)  The Mavic Pro has lots of sophisticated smarts on board too: automatic return-to-home, obstacle avoidance, a vision positioning system, object tracking/following/circling, etc.  It maintains a live high-definition feed to your phone/controller with a range of over 4 miles, though FAA rules require that you maintain visual line-of-sight and stay below 400 ft above ground level at all times.  (Also, drones and other remote-controlled aircraft cannot be used in national parks, wilderness areas, ski resorts, or around crowds or events without special permission.)

The Mavic Pro is very fun and easy to fly, and it’s amazing how clear and stable the video footage is – stable enough to use it as a flying tripod or for time-lapse photography.  Best of all, it folds down so nicely that it fits easily into a small backpack or carrying case.  It’ll be fun to bring along on some hiking and biking trips.

Here’s some additional footage – the first from nearby Wilder Ranch State Park, including a try of the Mavic’s “ActiveTrack” flight mode, and the second from just north of Pescadero Beach while looking for whales:

Wilder Ranch (50 seconds, 59 MB)

Near Pescadero (85 seconds, 110 MB)

Here’s DJI’s related SkyPixel site where you can see sample drone photography, and here’s one of many reviews of DJI’s Mavic Pro if you’re interested in more detail.  I’d recommend buying DJI’s “Fly More Combo Pack,” which includes the Mavic Pro plus two extra batteries, two extra propellers, the four-battery charging hub, a car charger cord, an adapter for charging your phone or other USB device from a battery pack, and the DJI carrying case/shoulder bag.  You’ll also likely want a lens shade, as the Mavic’s camera tends to catch sunlight even when not pointed at the sun.  This one works well, while this one is too fragile and breaks easily just mounting it.

And one more bit of footage – sneaking up on Darlene’s family while they were here visiting:

Drone Attack! (60 seconds, 22 MB)

Also tagged , | Leave a comment

Maker Faire 2016

A bit of video from this year’s Bay Area Maker Faire:

Video montage of the 2016 Bay Area Maker Faire (5:35 minutes, 114 MB)

Also tagged , , | Leave a comment

Riding the Segway

While visiting with Darlene’s family in Wisconsin/Minnesota, we went for a Segway ride and tour in La Crosse this past Sunday with Shel, Dan, Kathy and Shelly.  It was my first time on one, and it was a lot of fun: the handling is very intuitive and responsive – to the point of being a little addictive!  If you have yet to try one, look for a tour or rental in your area (like La Crosse Segway Tours) – it’s definitely worth it!

Click through for the full gallery of pics and video:

  

A short, 75-second video montage from our Segway ride in La Crosse, WI.

Also tagged , | Leave a comment

“Have you played Atari today?”

A little while ago – after reading “Ready Player One” again (Spielberg is making a movie!) and seeing a couple of tech talks by old Atari game programmers – I was lamenting that I’d sold my old Atari VCS so many years ago.  Well, Darlene jumped on that comment, found a bundle someone was selling on eBay and surprised me with an early birthday gift.  Yup: an old Atari VCS/2600 (four-switch version), a set of controllers and a bundle of game cartridges.  Sweet!  (I think my brother and I actually had the six-switch, Sears-rebranded version, but still very cool!)  Thanks, Darlene!

I immediately had to round out the set of 40 cartridges with a couple of other games I remember us playing a lot.  Then of course came the challenge of hooking it up: the Atari outputs an analog RF TV signal… on an RCA-plug cable.  You can use an adapter like this one to go from the RCA plug to a coax TV-cable input.  I don’t have a TV tuner, so rather than pulling a VCR out of a box in a closet, I hooked it up through my old USB EyeTV tuner/video converter to my MacBook – success!


Yeah, you can play any of these games via emulation on a modern computer, or even a smartphone/iPad, but there’s something very different about jamming the physical cartridge into the old physical console and handling that classic Atari joystick.  (And having to clean the contacts on all of the Activision cartridges with cotton swabs and alcohol to get them working again!)

It’s been fun to pick these up and rediscover old visual and procedural memories, like the admittedly simple path through the Adventure maze.  Some titles are only vaguely familiar until you plug them in, see the game again and then go “aha!!”

So… to paraphrase Atari’s old marketing… have you played your Atari today?


Also tagged , , | Leave a comment