All Logs: Bad!

I heard a rather distressing story last night. A major tech company’s security architects routinely issue requirements like this one:

“Send all logs to security”

This is so bad on so many levels.

What does “all” mean?
— Every debugging instrument in the code? That would slow performance to a crawl. It might even log sensitive information and leave plenty of clues about how to exploit the code (“information disclosure”).
— Every possible item in every log that the code makes?
– debugging
– behaviour
– performance
– scale
– etc.
Astute readers will note that some of these are, at best, only marginally of security interest.

Consider what the inundation of log detail does to the poor analysts who must sift through a mountain of dross for indicators of compromise (IoC). That’s not a job I want.

If you’re giving your developers requirements like “send all logs”, you are causing far more harm than good.

Any savvy developer understands that most of what their code logs isn’t security relevant until, perhaps, post-attack analysis (which is why it’s good practice to archive runtime logs “somewhere”).

Slamming developers with unspecific, perhaps impossible requirements burns precious influence in a useless firestorm. You’re letting your partners know that you don’t understand the problem, don’t care about them, and ultimately, do not offer value to their process.

As I’ve said many times, “Developers are very good at disinviting disruptive and valueless people to their meetings.”

Besides, such “requirements” are lazy. If you throw this kind of junk at people, you aren’t doing your job.

Our clients routinely ask us what they should monitor for security. We consider the tech, the architecture, and the platform very carefully. To start, we try to cherry-pick the most security-relevant items.

We usually offer a plan that goes from essentials towards what should become a mature security operations practice. What to monitor from application and running logs typically requires a journey of discovery: start manageably and build.
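To make “start manageably” concrete, here is a minimal sketch of what a first slice might look like: forward only a handful of clearly security-relevant events, and keep the rest local. The event names and the file target are hypothetical stand-ins, not a prescription.

```python
import logging

# Security-relevant events go to a dedicated logger (a stand-in for your SIEM forwarder).
security_log = logging.getLogger("security")
security_log.setLevel(logging.INFO)
security_log.addHandler(logging.FileHandler("security-events.log"))

# Debug, performance, and behaviour detail stays with the application;
# archive it "somewhere" for post-attack analysis rather than streaming it to analysts.
app_log = logging.getLogger("app")
app_log.setLevel(logging.DEBUG)

def on_login_failure(user: str, source_ip: str) -> None:
    security_log.info("auth_failure user=%s src=%s", user, source_ip)

def on_privilege_change(user: str, new_role: str) -> None:
    security_log.info("privilege_change user=%s role=%s", user, new_role)
```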

“All” is a lazy, uninterested answer. Security people need to be better than that.

Cheers,

/brook

Zero Trust Means AppSec

AT&T Cybersecurity sent me an invitation to a zero trust white paper that said:

“No user or application should be inherently trusted.

…key principles:

Principle #1: Connect users to applications and resources, not the corporate network

Principle #2: Make applications invisible to the internet, to eliminate the attack surface

Principle #3: Use a proxy architecture, not a passthrough firewall, for content inspection and security”

Anybody see what’s missing here? It’s in the 1st line: “no user should be…trusted”

Authentication and authorization are not magic!

“content inspection” has proven amazingly difficult. We can look for known signatures and typical abnormalities. Sure.

But message-encapsulated attacks, i.e., attacks intended to exploit vulnerabilities in your processing code, tend to be incredibly specific. When they are, identification will lag exploitation by some period. No protection is perfect.

This implies that attacks can and will ride in through your authenticated/authorized users that your content protection will fail to identify.

The application must also be built “zero trust”: trust no input. Assume that exploitable conditions will be released despite best efforts. Back in the day, we called it “defensive programming”.
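A minimal “trust no input” sketch, assuming a hypothetical request handler (your field names, limits, and validation rules will differ):

```python
import re

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")   # allow-list, not deny-list
ALLOWED_ACTIONS = {"read", "write", "delete"}
MAX_PAYLOAD = 64_000                             # bound every input, always

def handle_request(username: str, action: str, payload: bytes) -> None:
    # Validate everything before any business logic touches it.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("rejected: bad username")
    if action not in ALLOWED_ACTIONS:
        raise ValueError("rejected: unknown action")
    if len(payload) > MAX_PAYLOAD:
        raise ValueError("rejected: payload too large")
    process(username, action, payload)

def process(username: str, action: str, payload: bytes) -> None:
    ...  # the code we are defending
```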

Defensive programming, or whatever hyped buzzword you want to call it today, must remain a critical line of defence in every application.

That is, software security matters. No amount of cool (expensive) defensive tech saves us from doing our best to prevent the worst in our code.

At today’s state of the art, it’s got to be layers for survivability.

/brook

 

CISA Gives Us A Starting Point

Everyone struggles with identifying the “right” vulnerabilities to patch. It’s a near universal problem. We are all wondering, “Which vulnerabilities in my long queue are going to get me?”

Most organizations calculate a CVSS score (base or amended), which a solid body of research, starting with Allodi & Massacci, 2014, demonstrates is inadequate to the task.

Exploit Prediction Scoring System (EPSS) is based on the above research, so it could provide a solution. But the web calculator cannot score each open vulnerability in our queue over time: we need to watch the deltas, not a point in time. There is, unfortunately, no working EPSS solution yet, despite the promise.
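A rough sketch of what “watch the deltas” means in practice, assuming you can export a daily per-CVE score snapshot from whatever scoring source you use (file names, format, and the alert threshold below are hypothetical):

```python
import csv

def load(path: str) -> dict:
    """Read a 'cve,score' snapshot into a dict."""
    with open(path, newline="") as f:
        return {row["cve"]: float(row["score"]) for row in csv.DictReader(f)}

yesterday = load("scores-2021-11-01.csv")
today = load("scores-2021-11-02.csv")

for cve, score in sorted(today.items()):
    delta = score - yesterday.get(cve, 0.0)
    if delta > 0.1:  # a jump over time matters more than any single reading
        print(f"{cve}: {yesterday.get(cve, 0.0):.2f} -> {score:.2f} (+{delta:.2f})")
```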

CISA have listed 300 “vulnerabilities currently being exploited in the wild” in Binding Operational Directive 22-01.

Finally, CISA have given us a list that isn’t confined to a “Top 10”: Start with these!

Top 10 lists provide some guidance, but what if attackers exploit your #13?

300 is an addressable number, in my experience. Besides, you probably don’t have all of them in your apps, so your list will be fewer than 300. We can all do that much.

The CISA list provides a baseline, a “fix these, now” starting point of actively exploited issues that cuts through the morass of CVSS, EPSS, your security engineer’s discomfort, and the equation of total open vulnerabilities with organizational “risk”. (If you’re running a programme based upon the total number of open vulnerabilities, you’re being misled.)
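As a sketch of that starting point: intersect your open CVE queue with the catalogue. This assumes you have already downloaded CISA’s known-exploited-vulnerabilities JSON feed; the “vulnerabilities”/“cveID” field names reflect that feed as I understand it, so verify against the file you actually download.

```python
import json

# The catalogue, downloaded separately from CISA (field names assumed; verify).
with open("known_exploited_vulnerabilities.json") as f:
    actively_exploited = {v["cveID"] for v in json.load(f)["vulnerabilities"]}

# Your open queue, however you export it, as a set of CVE IDs (examples only).
open_queue = {"CVE-2021-44228", "CVE-2019-0708", "CVE-2020-1234"}

fix_now = sorted(open_queue & actively_exploited)
print(f"{len(fix_now)} of {len(open_queue)} open issues are on the CISA list:")
for cve in fix_now:
    print(" ", cve)
```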

https://thehackernews.com/2021/12/why-everyone-needs-to-take-latest-cisa.html

Trojan Source

Trojan Source!

I first read about Trojan Source this morning (ugh, Yet Another Branded Vulnerability: YABV).

Yes, there is a continuing fire hose of vulnerability announcements. But, new techniques are actually fairly rare: 1-3/year, in my experience.

There is nothing new about hiding attack code in source. These are the “backdoors” every software security policy forbids. Hiding attack code in Unicode strings by abusing the “bidi override” IS new and important (IMVHO). Brilliant research for which I’m grateful.

I’m a bit unclear about how attack code can supposedly be hidden through bidi override in comments. Every decent compiler strips comments during the transformation from source to whatever form the compiler generates: bytecode (Java), object code, etc. Comments are NOT part of any executable format, so far as I know. And they mustn’t be; they’d take up needed space.

Still, attacks buried in string data are bad enough to take notice: every language MUST include string data; modern languages are required to handle Unicode, the many language scripts that must be represented, and methods for presenting each script (i.e., bidi override). Which means exactly what the researchers found: nearly every piece of software that parses source into something executable (19 compilers and equivalent) is going to have this problem.
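To make the string-data point concrete, here is a tiny sketch (not the researchers’ proof of concept): a string literal carrying the Right-to-Left Override, U+202E, one of the bidi control characters the paper abuses.

```python
# What an editor renders can differ from the logical character sequence the
# parser actually stores once a bidi override is embedded in a literal.
s = "abc\u202edef"     # parser sees: a, b, c, RLO, d, e, f
print(len(s))          # 7 -- the override is a real, invisible character
print(repr(s))         # repr() exposes the hidden code point
print("\u202e" in s)   # True: data your code never expected to carry
```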

The other interesting research finding is how vendors reacted to being told about Trojan Source and how quickly they are fixing, if at all.

Only 9 vendors have committed to a fix. That is really surprising to me. If you want people to trust your software security, you’ve got to be rigorously transparent and responsive. Doh! What’s wrong with the other 10 vendors/makers? Huh? Let’s get with the program. Refusing to deal with it, dragging your feet, or, worse, ignoring the report makes it appear that you don’t care about the security of your product(s).

It’ll be interesting to see whether the Trojan Source technique actually gets used by attackers and how fast that occurs. I hope that one or more researchers pay attention to that piece of the puzzle. Depending upon which research findings you read, at least 75% (if not more) of reported vulnerabilities never get exploited. Will Trojan Source? Only time will tell.

Many thanks to Brian Krebs for the excellent write-up (below).

https://krebsonsecurity.com/2021/11/trojan-source-bug-threatens-the-security-of-all-code/

Cheers,

/brook

Is API Authentication Enough?

Reading a recent article, “Five Common Cloud Security Threats and Data Breaches”, posted by Andreas Dann, I came across some advice for APIs that is worth exploring.

The table stakes (basic) security measures that Mr. Dann suggests are a must. I absolutely support each of the recommendations given in the article. In addition to Mr. Dann’s recommendations, I want to explore the security needs of highly exposed inputs like internet-available APIs.

Again and again, I see recommendations like Mr. Dann’s: “…all APIs must be protected by adequate means of authentication and authorization”. Furthermore, I have been to countless threat modelling sessions where the developers place great reliance on authorization, often sole reliance (unfortunately, as we shall see).

For APIs with large open-source communities, public access, or relatively diverse customer bases, authentication and authorization will be insufficient.

As True Positives‘ AppSec Assurance Strategist, I’d be remiss if I didn’t understand just what protection, if any, controls like authentication and authorization provide in each particular context.

For APIs, essential context can be found in something else Mr. Dann states: “APIs…act as the front door of the system, they are attacked and scanned continuously.” One of the only certainties we have in digital security is that any IP address routable from the internet will be poked and prodded by attackers. Period.

That’s because, as Mr. Dann notes, the entire internet address space is continuously scanned for weaknesses by adversaries. Many adversaries, constantly. This has been true for a very long time.

So, we authenticate to ensure that entities who have no business accessing our systems don’t. Then, we authorize to ensure that entities can only perform those actions that they should and cannot perform actions which they mustn’t. That should do it, right?

Only if you can convince yourself that your authorized user base doesn’t include adversaries. Therein lies “the rub”, as it were. Only systems that can verify the trust level of authorized entities have that luxury. There actually aren’t very many cases involving relatively assured user trust.

As I explain in Securing Systems, Secrets Of A Cyber Security Architect, and Building In Security At Agile Speed, many common user bases will include attackers. That means that we must go further than authentication and authorization, as Mr. Dann implies with, “…the API of each system must follow established security guidelines”. Mr. Dann does not expand on his statement, so I will attempt to provide some detail.

How do attackers gain access?

For social media and freemium APIs (and services, for that matter), accounts are given away to anyone with a valid email address. One has only to glance at the number of cats, dogs, and birds who have Facebook accounts to see how trivial it is to get an account. I’ve never met a dog who actually uses, or is even interested in, Facebook. Maybe there’s a (brilliant?) parrot somewhere? (I somehow doubt it.)

Retail browsing is essentially the same. A shop needs customers; all parties are invited. Even if a credit card is required for opening an account, which used to be the norm, but isn’t so much anymore, stolen credit cards are cheaply and readily available for account creation.

Businesses which require payment still don’t get a pass. The following scenario is from a real-world system for which I was responsible.

Customers paid for cloud processing. There were millions of customers, among which were numerous nation-state governments whose processing often included sensitive data. It would be wrong to assume that nation-states with spy agencies could not afford to purchase a processing account in order to poke and prod, just in case a bug should allow that state to get at the sensitive data items of one of its adversaries. I believe (strongly) that it is a mistake to assume that well-funded adversaries won’t simply purchase a license or subscription if the resulting payoff is sufficient. Spy agencies work slowly and tenaciously, often waiting years for an opportunity.

Hence, we must assume that our adversaries are authenticated and authorized. That should be enough, right? Authorization should prevent misuse. But does it?

All software fails (Rule #1), including our hard-won and carefully built authorization schemes.

“[E]stablished security guidelines” should expand to a robust and rigorous Security Development Lifecycle (SDL or S-SDLC), as James Ransome and I lay out in Building In Security At Agile Speed. There’s a lot of work involved in “established security”, or perhaps better termed, “consensus, industry standard software security”. Among software security experts, there is a tremendous amount of agreement about what “robust and rigorous” software security entails.

(Please see NIST’s Secure Software Development Framework (SSDF) which is quite close to and validates the industry standard SDL in our latest book. I’ll note that our SDL includes operational security. I would guess that Mr. Dann was including typical administrative and operational controls.)

Most importantly, for exposed interfaces, I continue to recommend regular, or better, continual fuzz testing.

The art of software is constraining software behaviour to the desired and expected. Any unexpected behaviour is probably unintended and thus will be considered a bug. Fuzzing is an excellent way to find unintended programme behaviour. Fuzzing tools, both open source and commercial, have gained maturity while, at the same time, commercial tools’ prices have been decreasing. The barriers to fuzzing are finally disappearing.
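For illustration only, here is the bare idea of a fuzz run against a hypothetical parser (real fuzzers are coverage-guided and far smarter than this blind loop):

```python
import random

def parse_request(data: bytes) -> dict:
    """Stand-in for the input-handling code under test."""
    text = data.decode("utf-8")          # may raise on malformed input
    key, _, value = text.partition("=")
    return {key: value}

random.seed(0)
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(64)))
    try:
        parse_request(blob)
    except UnicodeDecodeError:
        pass                              # an expected, documented failure mode
    except Exception as exc:              # anything else is a finding
        print(f"input {blob!r} triggered {exc!r}")
```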

Rest assured that you probably have authorized adversaries amongst your users. That doesn’t imply that we abandon authentication and authorization. These controls continue to provide useful functions, if only to tie bad behaviour to entities.

Along with authentication and authorization, we must monitor for anomalous and abusive behaviour, because our protective software will inevitably fail. We must also do all we can to remove issues before release, while using techniques like fuzzing to identify those unknown unknowns that will leak through.
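As one small illustration of that monitoring, a sketch that flags authenticated users generating too many denials in a sliding window (thresholds and the event source are hypothetical; real detection is far richer):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300
MAX_DENIALS = 20

denials = defaultdict(deque)   # user -> timestamps of recent denied requests

def record_denial(user: str, now: float) -> bool:
    """Record one denied/failed request; return True if the user looks abusive."""
    q = denials[user]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_DENIALS

# usage: if record_denial("some_user", time.time()): raise an alert for review
```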
 

Cheers,

/brook

Continuing Commentary on NIST AppSec Guidance

Continuing my commentary on NIST #appsecurity guidance: again, NIST reiterates what we recommended in Core Software Security and then updated (for Agile, CI/CD, DevOps) in Building In Security At Agile Speed:

 

— Threat model
— Automate as much as possible
— Static analysis (or equivalent): SAST/IAST
— DAST (web app/API scan)
— Functional tests
— Negative testing
— Fuzz
— Check included 3rd-party software

 

NIST guidance calls out heuristic hard-coded secrets discovery. Definitely a good idea. I maintain that it is best to design in protections for secrets through threat modelling. Secrets should never be hardcoded without a deep and rigorous risk analysis. (There are a few cases I’ve encountered where there was no better alternative. These are few and very far between.)
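For illustration, a minimal heuristic in the spirit of that guidance (hypothetical patterns; purpose-built secret scanners do far more, with far fewer false negatives):

```python
import pathlib
import re
import sys

PATTERNS = {
    "AWS-style access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic secret assignment": re.compile(
        r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan(path: pathlib.Path) -> None:
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                print(f"{path}:{lineno}: possible {name}")

root = pathlib.Path(sys.argv[1]) if len(sys.argv) > 1 else pathlib.Path(".")
for file in root.rglob("*.py"):
    scan(file)
```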

NIST guidance is essentially the same as our Industry Standard, Generic Security Development Life cycle (SDL).

There has been an implicit industry consensus on what constitutes effective software security for quite a while. Let’s stop arguing about it, OK? Call the activities whatever you like, but do them.

Especially, fuzz all inputs that will be exposed to heavy use (authenticated or not!). I’ve been saying this for years. I hope that fuzzing tools are finally coming up to our needs.

https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity/recommended-minimum-standard-vendor-or-developer

MITRE D3FEND Is Your Friend

Defenders have to find the correct set of defences for each threat. Many attacks have no direct prevention, making the relationship many-to-many: threat:defence == M:N.

Certainly, some vulnerabilities can be prevented or mitigated with a single action. But overall, other threats require an understanding of the issue and of which defences might be applied.

While prevention is the holy grail, often we must:

— Limit access (in various ways and at various levels in the stack)
— Mitigate effects
— Make exploitation more difficult (attacker cost)
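One way to picture that many-to-many relationship (the threat and defence names below are illustrative shorthand, not ATT&CK or D3FEND taxonomy):

```python
# Each threat needs several defences, and the same defence serves several threats.
threat_to_defences = {
    "credential stuffing": ["rate limiting", "MFA", "breach-credential monitoring"],
    "phishing-delivered malware": ["mail filtering", "application allow-listing", "EDR"],
    "exploit of unpatched service": ["patching", "network segmentation", "EDR"],
}

# Invert the map to see what each control actually buys you.
defence_to_threats = {}
for threat, defences in threat_to_defences.items():
    for defence in defences:
        defence_to_threats.setdefault(defence, []).append(threat)

for defence, threats in sorted(defence_to_threats.items()):
    print(f"{defence}: {', '.join(threats)}")
```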

MITRE’s D3FEND maps defences to attacks:

“…tailor defenses against specific cyber threats…”

In my threat modelling classes, the 2 most difficult areas for participants are finding all the relevant attack scenarios and then putting together a reasonable set of defences for each credible attack scenario. Typically, newbie threat modellers will identify only a partial defence, or misapply one.

MITRE ATT&CK helps to find all the steps an attacker might take from initial contact to compromise.

MITRE D3FEND and the ontology between ATT&CK and D3FEND give defenders what we need to then build a decent defence.

https://www.csoonline.com/article/3625470/mitre-d3fend-explained-a-new-knowledge-graph-for-cybersecurity-defenders.html#tk.rss_all

I don’t typically amplify security tool vendors’ announcements.

However, for about 15 years, I’ve been urging vendors to address the millions of developers who do not work for a company large enough to afford million-dollar tools, or even tools whose entry price is tens of thousands of dollars. Millions of programmers cannot afford commercial tools as currently priced; I’m sorry to be so very blunt.

ForAllSecure have done it! The Mayhem for API Free Plan*
This is a significant step in the right direction to everyone’s benefit.

(Please attend my keynote at FuzzCon, August 5th, for what’s wrong with #appsec and why multiple techniques that must include #fuzzing comprise our current best hope for software security.)

Kudos to the folk at ForAllSecure. You’re leading the way towards a brighter, more secure future.

To be fair, a couple of static analyzer vendors have offered open source projects free scanning for quite some time. Open source programmers: there’s no excuse for not taking advantage of these services!

Still, much software is proprietary, with a lot of it written by startups, small shops, and lone programmers. These coders need tools, too. We all suffer because a large percentage of coders don’t have access to a broad selection of commercial-grade tools.

Other vendors, are you listening? I hope so.

*50 free scans/month

cheers,

/brook

Threat Model Diversity

“People and collaboration over processes, methodologies, and tools.”

The Threat Modeling Manifesto, Value #2

During our panel, my dear friend and longtime collaborator, Izar Tarandach, mentioned something that occurs so often when threat modelling as to be usual and customary: it is to be expected, not merely typical. We were speaking on Threat modelling for Agile software development practices at OWASP BeNeLux Days 2020. Izar was addressing the need for broad participation in a threat model analysis. As an example, he described the situation when an architect is drawing and explaining a system’s structure (architecture) at a whiteboard, and another participant in the analysis, perhaps even a relatively junior member of the team, says, “but that’s not how we had to implement that part. It doesn’t work that way”.

I cannot count the number of times I’ve observed a disparity between what designers believe about system behaviour and how it actually runs. This happens nearly every time! I long ago worked the necessity of uncovering discrepancies into my threat modelling methods. That this problem persists might come as a surprise, since Agile methods are meant to address this sort of disconnect directly. Nevertheless, these disconnects continue to bedevil us.

To be honest, I haven’t kept count of the number of projects that I’ve reviewed where an implementer has had to correct a designer. I have no scientific survey to offer. However, when I mention this in my talks and classes, and when I speak with security architects (which I’m privileged to do regularly), to a person, we have all experienced this repeatedly; we expect it to occur.

In my understanding of Agile practice, and particularly SCRUM, with which I’m most familiar and practice, design is a team effort. Everyone involved should understand what is to be implemented, even if only a sub-team, or a single person actually codes and tests. What’s going on here?

I have not studied the cause and effect, so I’m merely guessing. I encourage others to comment and critique, please.  Still, as projects get bigger with greater complexity, there are two behaviours that emerge from the increased complexity that might be responsible for disconnection between designers and implementers:

  1. In larger projects, there will be a need to offer structure to and coordinate between multiple teams. I believe that this is what Agile Epics address. Some amount of “design”, if you will (for want of a better word) takes place outside and usually before SCRUM teams start their cycles of sprints. This structuring is intended to allow pieces of the system to be delivered by individual, empowered Agile teams while still coming together properly once each team delivers. My friend Noopur Davis calls this a “skeleton architecture”. The idea is that there will be sufficient structure to have these bits work together, but not so much as to get in the way of a creative and innovative Agile process. That may be a rather tricky razor’s edge to achieve, which is something to consider as we build big, complex systems involving multiple parts. Nonetheless, creating a “skeleton architecture” likely causes some separation between design and implementation.
  2. As this pre-structuring becomes a distinct set of tasks (what we usually call architecture), it fosters, by its very nature, structuring specialists, i.e., “architects”. Now we have separation of roles, one of the very things that SCRUM was meant to bridge, and collapsing of which lies at the heart of DevOps philosophy. I’ve written quite a lot about what skills make an architect. Please take a look at Securing Systems, and Secrets Of A Cyber Security Architect for my views, my experiences, and my attempts to describe security architecture as a discipline. A natural and organic separation occurs between structurers and doers, between architects and implementers as each role coalesces. 

Once structuring and implementing split into discrete roles, and I would add, disciplines, then the very nature of Agile empowerment to learn from implementing practically guarantees that implementations diverge from any plans. In the ideal world, these changes would be communicated as a feedback loop from implementation to design. But SCRUM does not specifically account for design divergences in its rituals. Designers need to be present during daily standup meetings, post-sprint retrospectives, and SCRUM planning rituals. Undoubtedly, this critical divergence feedback does occur in some organizations. But, as we see during threat modelling sessions, for whatever reasons, the communications fail, or all too often fail to lead to an appropriately updated design.

Interestingly, the threat model, if treated as a process and not a one-time product can provide precisely the right vehicle for coordination between doing and structure, between coding and architecture. 

A journey of understanding over a security or privacy snapshot.

The Threat Modeling Manifesto, Value #3

Dialog is key to establishing the common understandings that lead to value, while documents record those understandings, and enable measurement.

The Threat Modeling Manifesto, Principle #4

When implementors must change structures (or protocols, or flows, anything that affects the system’s architecture), this must be reflected in the threat model, which must mirror everyone’s current understanding of the system’s state. That’s because any change to structure has the potential for shifting which threats apply, and the negative impacts from successful exploitations. Threat modelling, by its very nature, must always be holistic. For further detail on why this is so, I suggest any of the threat modelling books, all of which explain this. My own Secrets Of A Cyber Security Architect devoted several sections specifically to the requirement for holism. 

Thus, a threat model that is continually updated can become the vehicle and repository for a design-implementation feedback loop.

Value #5:

“Continuous refinement over a single delivery.”

Threat modelling must, by its very nature, deal in both structure (architecture) and the details of implementation: engineering. Threat modelling today remains, as I’ve stated many times, both art and science. One imagines threats as they might proceed through a system, oftentimes a system that is yet to be built or is currently being built. So, we may have to imagine the system, as well.

The threats are science: they operate in particular ways against particular technologies and implementations. That’s hard engineering. Most defenses also are science: they do what they do. No defense is security “magic”. They are all limited.

Threat modelling is an analysis that applies the science of threats and defenses to abstracted imaginings of a system through mental activity, i.e., “analysis”.

Even though a particular weakness may not be present today, or we simply don’t know yet whether it’s present, we think through how attackers would exploit it if it did exist. On the weakness side, we consider every potential weakness that has some likelihood of appearing in the system under analysis, based upon weaknesses of that sort that have appeared at some observable frequency in similar technologies and architectures. This is a conjunction of the art of imagining coupled to the science of past exploitation data, such as we can gather. Art and science.

The Threat Modeling Manifesto’s definition attempts to express this:

Threat modeling is analyzing representations of a system to highlight concerns about security and privacy characteristics.

Threat modelling then requires “representations”, that is, imaginings of aspects of the target of the analysis: typically, drawings and other artifacts. We “analyze”, which is to say, apply the art and science I briefly described above, to assess “characteristics”, another somewhat imprecise concept.

In order to be successful, we need experts in the structure of the system, its implementation, and the system’s verification, experts in the relevant threats and defenses, and probably representatives who can speak to system stakeholders’ needs and expectations. It often helps to also include those people who are charged with driving and coordinating delivery efforts. In other words, one has to have all the art and all the science present in order to do a reasonably decent job. To misquote terribly, “It takes a village to threat model”[1].

Participation and collaboration are encapsulated in the Threat Modeling Manifesto.

Consider the suggested Threat Modeling Manifesto Pattern, “Varied Viewpoints”:

Assemble a diverse team with appropriate subject matter experts and cross-functional collaboration.

And, our Anti-pattern, “Hero Threat Modeler” speaks to a need to avoid over-dependence on a single individual:

“Threat modeling does not depend on one’s innate ability or unique mindset; everyone can and should do it.”

Another frequent occurrence during threat modelling sessions is when a member of the team who would not normally have been included in leader-only threat modelling names a weakness of which the leaders were unaware, or which other active participants simply missed in their analysis. Diversity in threat modelling applies more system knowledge and understanding, which leads to more complete models.

Since attackers are adaptive and creative, and there are a multiplicity of objectives for attack, diversity while modelling counters attacker variety. I will note that through the many threat models I’ve built (probably thousands), I’ve been forced to recognize my limitations: I am perfectly capable of missing something obvious. It’s my co-modelers who keep me honest and who check my work.

Failure to uncover the differences between representation and implementation risks the failure of one’s threat model. As Izar explained, even a lead designer may very likely not understand how the system actually works. To counter this problem, both Izar and I, and probably all the co-authors of the Threat Modeling Manifesto, account for these misunderstandings through our threat modelling processes. One way is to ensure that a diverse and knowledgeable collection of people participate. Another is to begin a session by correcting the current representations of the system through the collected knowledge of participants.

I often say that one of the outcomes of broad threat model participation has nothing to do with the threat model. Attendees come away with a fuller understanding of how the system actually works. The threat model becomes a valued communication mechanism with effects beyond its security and privacy purposes. As both art and science, we find it critical to be inclusive rather than exclusive when threat modelling. Your analyses will improve dramatically.

Cheers,

/brook


[1] Please don’t make assumptions about my personal politics just because I mangled the title of one of Hillary Clinton’s books. Thanks!

Bad Design Decisions…


Sometime in 2018, I noticed a pitch for a new “sport” watch that generated its energy from your body. That sounded cool, so I sent the pitch to my spouse. Then I forgot all about it.

Imagine my surprise when the brand new, version 1 watch appeared out from under our Christmas tree.

Immediately, things started to go south. The watch was supposed to turn on; it remained blank, silent, and obviously inoperable, no matter how closely I followed the single page of textual directions. Try as I might, it was just a hunk of metal, plastics, and circuits (presumably).

Naturally, I called Matrix PowerWatch Support, who patiently detailed exactly the instructions I’d already determined were not working. I always find that a bit patronizing, since I did take the time to describe everything that I’d tried. They sent me the accessory USB charger, gratis, but still no joy with that particular watch. Another call to support.

I should have taken the initial experience as a sign. There would be many more exchanges over the following year and more, until I realized that Matrix PowerWatch had apparently intentionally burned all of its version one buyers.

After six weeks or so of wrangling, Matrix finally sent me a working watch, which, per instructions, started, paired, and appeared to function.

The PowerWatch keeps time, as expected. It also measures steps, though unlike my couple of early Fitbits, the PowerWatch has no ability to calibrate one’s stride. My longer-than-typical legs cover somewhat more distance per step than those of people more typically proportioned.

Since the PowerWatch is constantly measuring my body temperature (and its disparity with ambient), the watch will “calculate” calories expended, as well. I found this last to be an interesting data point. I didn’t take the calorie count as gospel, but the watch could definitely tell me when I’d had a good workout versus days spent at a computer.

The watch also has a stopwatch, timer, and lap counter. I found these almost unworkable given the way the two physical buttons on the watch work, or don’t work much of the time. So I never used these functions. My mobile phone is a lot easier.

The next sign of trouble was syncing the watch to my phone. The instructions said to go to the watch’s settings, scroll down to “Sync”, then press both buttons. This operation only sometimes worked. Pressing both buttons at the same time requires serious hand contortions that I found painful at times, certainly uncomfortable. Little did I know as I started having troubles that sync failures were a symptom of poor design choices for which there would be no remedy.

More calls and emails to Matrix Support, which often went unanswered for days. I suspect that there was actually only a single support person, maybe a couple? Anyway, as you may imagine, the process and the watch were becoming quite a time sink.

Finally, the support engineer told me that I could simply pull the app display down and the watch would sync. Eureka! Why wasn’t that in any instructions or described somewhere on the PowerWatch web site?

Finally, was I past issues and on to happy use? Um, no.

For, in their wisdom, the PowerWatch software engineers had chosen to either write their own Bluetooth stack, or muck with one in some way as to make the stack predictably unstable.

I have had, and continue to use without trouble, dozens of Bluetooth devices, the vast majority of which pair and sync (if needed) with phones, tablets, and computers nearly seamlessly. Some early devices had weak antennas, so syncing had to be performed in close proximity. But once paired, syncing just happens. Not so the Matrix PowerWatch.

The watch would sync for a few days. Then the pairing between watch and phone would break in some way. The sync operation would timeout. If I restarted the app, syncing might occur. Once in a while, a reboot of the phone might get it going.

Meanwhile, all the other Bluetooth devices paired with that phone continued to work fine. Which has led me to believe that it’s the watch software at fault.

But you see, the PowerWatch designers, in their infinite lack of foresight and wisdom, provided no way to simply reboot the watch! Better would have been a way to restart the Bluetooth stack. Observing instability, one prepares for it. Actually, one prepares for instability, period, especially in version 1!

I’m pretty sure that a watch system reboot would have solved this problem. But no, the only recourse is to unpair the watch and reset it to start up mode: factory reset. That’s the only option.

Unpairing the watch removes all one’s accumulated fitness data (and any personalization) from the watch and application. One is forced to start over completely. There is no other course.

Worse, if you don’t sync the watch for a few days (who wants to go through this horror even once/day?), it both loses fitness data AND the Bluetooth connection blows up. Everything is lost. This happens regularly, predictably.

One of the first design rules that I learned, years ago, was, “never, ever lose your customer’s data”. Failure one.

Rule 2: for difficult or tricky protocols, try to use a vetted implementation. Roll your own at your own and your customers’ risk. Perhaps, failure two?

Rule 3: Plan for errors and crashes, especially from conditions you can’t anticipate during development. Failure three, for sure.

If I’d had a way to restart without losing data, say, turn Bluetooth off, then on again, or even a non-destructive reboot, I wouldn’t be writing this today. All software contains bugs. We have learned how to work around that. Except, not the Matrix folks.

As I worked through my debugging process with PowerWatch Customer Support, they promised me that in March, 2020, there would be a new version that “fixed this problem”.

I hope that I’ve learned how to be patient, especially with software fixes. After a lifetime designing, writing, debugging, securing software, I understand the difficulties in finding and fixing bugs, then getting the fixes into the field. I get it.

Waiting, I lived with resetting my watch every 2-4 weeks, losing whatever accumulated data I had, sure. Amongst the many things competing for my time, a sport watch isn’t really at the top of my list. I’d already spent far too much time on PowerWatch.

March finally arrived. I dutifully attempted the upgrade (that was a three-day battle in and of itself). Guess what? The promised upgrade was only for version 2 watches! My watch’s system was precisely the same after the upgrade as before. All problems still active, nothing changed. I’d been misled, purposely?

As you may imagine, I was upset. I still am. Matrix apparently decided that version 1 customers weren’t worth keeping. Maybe there’s some hardware or operating system limitation that prevents the upgrade? I really don’t care. That shouldn’t be my worry.

Was there an upgrade offer from Matrix? That’s often used as a path forward. Say, an offer of 50% off the retail for version 2? Nope. Crickets.

What I didn’t realize is that there was another lurking reason for burning version 1 customers: the battery died a month later. Now, no watch. Just a pile of metal, plastics, and circuits. The free charger still works. But I’ve no watch to charge on it.

I suspect that Matrix realized not only that the watch is inherently unstable due to its Bluetooth issues and poor design, but also that these version 1 devices were going to stop functioning soon anyway. Matrix threw away its first customers. They’ve turned their back on us. It’s “goodbye and good luck”.

My only recourse is to publish my sad story so that others will be wary of a company that clearly doesn’t much care about at least some of its customers. And, engineers who don’t program defensively, who fail to give users ways to deal with their mistakes.

Don’t buy PowerWatch, unless you understand with whom you’re going to be dealing.

Cheers,

/brook