I don't always have time to keep up with the Digital Health Group on LinkedIn, but one of the most recent and most active discussions centers on us, people with diabetes, as bellwethers of the effectiveness of self-measurement and "patient engagement" in clinical outcomes — in short, whether or not self-testing helps us modify our diets and other behaviors, and whether or not this has a positive long-term effect on our health.
Put aside for the moment any biases you might have about how to refer to the insulin-resistant, the glycemically sluggish, the pancreatically challenged. The article initially suggested that among all people with diabetes, fewer than 65% test more than once a month, before pulling back (in a struck-out correction) to note that the studies applied to type 2 PWD not using injected insulin.
The real meat of the article is that we are the early adopters of self-monitoring, of the Quantified Self, and if we hate what we have to do to stay alive (the "staying alive" part is conveniently omitted), what are the takeaways for designing other self-monitoring equipment and protocols — particularly for the chronically ill, as opposed to the "worried well". But that's not what the discussion — both over at The Atlantic's Web site and LinkedIn's Digital Health Group — is about.
The most prolific contributors to this discussion have been people living with diabetes and clinicians who specialize in treating patients with diabetes.
Many of the old (to us) tropes are raised again and again: people with type 1 cannot skip testing and continue to live. They hate having to live with a condition that does not respond logically and consistently to the management techniques they throw at it, but it beats the alternative. The amount of data — especially if a CGM is involved — is enormous, and the time to examine and analyze it is practically nonexistent.
Many of us with type 2 have been diagnosed later in life, and have multiple chronic illnesses that we are trying to manage and monitor simultaneously (this is not to ignore that many people with type 1 also have multiple medical conditions, some of which can be monitored). Many of us have lives too busy to find the few minutes to monitor, or the longer amounts of time to make sense of the data; others of us are not instructed to monitor, or are not given sufficient training and supplies to monitor effectively, or don't have access to the meal and food choices we need to make to treat our conditions effectively.
Unfortunately, the "type 2 is all your own fault" trope pops up several times in that discussion, and the loudest clinicians insist that the current understanding (that type 2 diabetes has strong genetic and autoimmune components, and is not clinically diagnosable until after significant beta cell death has occurred) is WRONG!WRONG!WRONG!WRONG!WRONG!, based on HuffPo articles of specious neutrality (see the comments) and Diabetes Journal articles from nearly a decade ago. Without the specific references I needed at hand in a reasonable timeframe, I've had to let these clinicians go on with their misguided notions that type 2 diabetes is 90% our own fault and 90% completely reversible. (This falls under the general understanding that if a person is completely convinced that something is so, nothing you can say or do will change their mind. Don't bother wasting your breath; find something else to discuss like mature adults.)
Once we get past those tropes, there's some real meat to be had in the conversation.
- Diabetes is one of the few — if not the only — chronic medical conditions in which the majority of the treatment and management is left to the patient (us).
- The time required to log, analyze, and act on data is difficult to find, and the inability to control those data (for whatever turn of the screw diabetes has thrown at us that day) makes the monitoring depressing, providing a constant sense of failure and guilt. Paradoxically, not logging and monitoring also exposes us to feelings (and accusations) of failure and guilt. (Sounds like the stereotypical Jewish or Italian mother, doesn't it?)
- There is no user-friendly, mostly-harmless, not-invading-our-day method of monitoring (except, maybe, a CGM — but those have to be calibrated almost as frequently as normal fingersticks, and the peak/dip/alarm readings confirmed by fingersticks) — in part because the design of our devices and their user interfaces is largely driven by FDA, rather than patient or clinician, specifications.
- There is no good method of correlating the data from multiple measuring devices (e.g., blood glucose, blood pressure, weight) in a single spot.
- There is a difference in how monitoring is seen depending on whether you are a Quantified Selfer (who likes to measure for the sake of it), someone who is trying to achieve a SMART (Specific, Measurable, Achievable, Realistic, Time-bound) goal (such as training for a 5K run), or a person whose life quite literally depends on "wasting" large portions of the day monitoring, logging, and having to change one's entire food, medication, and exercise plan hour by hour based on a set of numbers that sometimes bear no relation to the treatments thrown at them.
Several well-known DOC'ers have contributed to this conversation, including Strangely Diabetic's Scott Strange, Bennet of YDMV, and former dLife® CEO Howard Steinberg. If you've got the time to wade through the rhetoric, you may want to give this thread a look.
Megan was diagnosed in 2009 with Type 1. As an RN, she was familiar with the medical side of her diagnosis; learning to be a good patient, on the other hand, was and continues to be the challenge of her day-to-day life.