Why bad technology dominates our lives

kuqezi

Woodpecker
An interesting article that takes a deeper look at how humans are serving technology rather than the other way around.

Just think about your life today, obeying the dictates of technology–waking up to alarm clocks (even if disguised as music or news); spending hours every day fixing, patching, rebooting, inventing work-arounds; answering the constant barrage of emails, tweets, text messages, and instant this and that; being fearful of falling for some new scam or phishing attack; constantly upgrading everything; and having to remember an unwieldy number of passwords and inane personal questions for security, such as the name of your least-liked friend in fourth grade. We are serving the wrong masters.
 

TheFinalEpic

Pelican
Catholic
Gold Member
People that let technology take over their lives are idiots.

We live in the best time to be able to impact the world, start a business, and meet people.

Yet, everyone is addicted to social media?

Use this shit to your advantage, market yourself, your business, and you will be playing life on easy mode.

No other time in history could you make millions of dollars by simply pressing buttons on a computer screen.
 

estraudi

Pelican
Gold Member
Had this exact thing happen last night with my woman.
She was ordering dinner (Jimmy John's) off her iPhone. I didn't know she was doing it until she mentioned having problems with the site and the app. Said she'd been trying to order for 15 minutes.

Ok, no biggie, I offered to go pick it up, 2 miles away (5 mins in traffic). She said to let her get it and she'd re-try the order. I resumed playing with my son, and 20 mins later the idiot was STILL trying & waiting on her phone to work right. I finally told her to get off the stupid thing, I'd go get the damn food, and we'd be eating in 20 mins!

Lo & behold, I got back in 17 mins and we were eating before the 20 minutes were up! I told her how stupidly dependent she is on that piece of shit phone. Went in one ear & out the other, of course. Good choice of words for the title, OP. Bad technology indeed.
 

kuqezi

Woodpecker
TheFinalEpic said:
People that let technology take over their lives are idiots.

We live in the best time to be able to impact the world, start a business, and meet people.

Yet, everyone is addicted to social media?

Use this shit to your advantage, market yourself, your business, and you will be playing life on easy mode.

No other time in history could you make millions of dollars by simply pressing buttons on a computer screen.

Fully co-sign!
...and I've been in software engineering and design for 10 years now. Still, it's ridiculous and painful to watch a deeply unsocial society immersed in social media: a generation of stupid people with smart phones, a sea of possibilities out there to enable anyone to be smart and successful, and yet we see more and more disabilities of mind and body!
 

Truth Tiger

Kingfisher
Gold Member
Estraudi's post is something I've dealt with too. I don't use food ordering apps, mostly because there's little or no quality control. Items aren't made the way you asked, something's missing, it's the wrong size, etc. The farther removed a human is from you in the delivery chain, the more likely something will get screwed up. Plus, when you have one person who picks the food, another who delivers it, and another who answers the phone/email when you have a problem, there's no accountability or ownership. 'Would you like a credit for the missing item?' doesn't resolve the problem. Instacart is a fucking joke when it comes to fresh vegetables and fruits, at least from what I've seen. I've had and seen issues with Grubhub, etc.

I think we've reached peak technology, where the people who code may be smart and pay attention to details, but the people who use those apps on the business side aren't aware and intelligent enough to ensure a quality experience in toto. I hate to make it a generational thing, but the 20-to-30-somethings seem more likely to be brain-dead. Being raised on instant-gratification social media and addictive apps / games creates serotonin-doped useful idiots. It can happen to older people, too.

Technology can be a great tool but usually a human has to be involved and these days with short attention spans, the human ends up being the weakest link, causing the system to suffer. I'm not sure I'd trust a robot to select a ripe mango or a non-mealy apple, though. It's a sad state of affairs to think we'll need that one day. The gap between intelligent and stupid ever widens.
 

Thomas More

Crow
Protestant
TheFinalEpic said:
People that let technology take over their lives are idiots.

We live in the best time to be able to impact the world, start a business, and meet people.

Yet, everyone is addicted to social media?

Use this shit to your advantage, market yourself, your business, and you will be playing life on easy mode.

No other time in history could you make millions of dollars by simply pressing buttons on a computer screen.

We do live in the best time if we are smart and handle things the smart way. However, the vast majority is trapped in nearly every area of their lives.

We all know the problems with getting married to some near the wall chick who wants to get off the carousel and lock down a beta bux sucker.

It's clear that the modern economy is designed to farm us as taxpayers and consumers.

As the OP's link points out, we seem to live to serve our possessions and our high tech widgets.

The key is to see how the game is played, and then to find a way to not play it, or to play the game on your own terms. Work, but make sure you build up some fuck you money, so you are not under the thumb of an employer. Start your own business, or build up enough investments that you can live off the income. Avoid marriage, or marry with the full red pill understanding so you maximize your chances of success and mitigate the risks. In the case of technology, recognize it as a tool, and don't let it become an obsession.

If you avoid the pitfalls, this is the best time ever to live in, with the greatest opportunity and with a world full of wonders and beauty. If you don't avoid the pitfalls, you can be stuck in a crappy job, living in a lousy apartment complex in suburbia, with a bitchy wife, living paycheck to paycheck with a couple of years' income worth of debt.

Choose wisely!
 

Days of Broken Arrows

Crow
Gold Member
The 1950s: People listened to music on tinny-sounding portable devices (transistors) and it sounded awful.

The 1960s-1980s: People listened to music on big stereo speakers and it sounded mindblowing.

The 2000s-2010s: People listen to music on tinny-sounding portable devices (smart phones) and it sounds awful.

This is progress?
 

balybary

Pelican
Catholic
One of the problems with technology is decreasing reliability. Just one electronic sensor failure, and a car can't start.

https://www.edn.com/electronics-blo...is-your-car-less-reliable-than-it-used-to-be-

Probably the most reliable vehicle I have had was a 1987 Toyota pick-up with a 22R 4-cylinder engine. There was not much in the way of electronics in it – just a transistorized ignition and a radio. The rest was all mechanical or electromechanical. Not much went wrong with it. All I replaced was an igniter and a set of front hubs. I traded it in at about 160,000 miles (260,000km). It was simple enough I could do most anything in the way of basic maintenance myself.

By comparison, many of today's vehicles are loaded with electronics: "power this" and "electronic that." Vehicles even have electronics subsystems in the rear view mirror. Each piece of electronics itself is fairly reliable. Let's say that each stands a 0.1% chance of failure each year after leaving the dealer's lot. The real issue is that each function may have an MCU and associated bus as well as a host of discrete parts. The number of MCUs, FPGAs, and even ASICs, can be mind-boggling alone. What used to be a dashboard with a few mechanical gauges is now host to scores of complex semiconductors. The powertrain uses more. The accessories even more.

Let's say this is a mid-complexity car with around 200 complex semiconductor-based boards. This means there is a one-in-five chance that something will fail in the vehicle each year! If left alone, after 15 years (the length I kept my Toyota), there could be three failed items. On vehicles I've purchased following the Toyota I have had this very thing happen – and not just with minor systems, but with things that affect the engine and brakes.
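The arithmetic in that quote can be checked with a few lines of Python, a sketch using the article's own illustrative figures (200 boards, 0.1% annual failure chance each, 15 years of ownership):

```python
# Illustrative reliability math for the figures quoted above.
# Assumes 200 independent electronic modules, each with a 0.1%
# chance of failing in any given year (made-up numbers from the quote).
modules = 200
p_fail_per_year = 0.001

# Chance that at least one module fails in a single year
p_any_fail_year = 1 - (1 - p_fail_per_year) ** modules
print(f"Per-year failure chance: {p_any_fail_year:.1%}")  # ~18%, i.e. roughly one in five

# Expected number of failed modules over 15 years of ownership
years = 15
expected_failures = modules * p_fail_per_year * years
print(f"Expected failures over {years} years: {expected_failures:.0f}")  # 3
```

The exact "at least one failure" figure is about 18%, which the article rounds up to one in five; the expected count of three failed items over 15 years matches the quote directly.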
 

king bast

Kingfisher
Protestant
TheFinalEpic said:
People that let technology take over their lives are idiots.

We live in the best time to be able to impact the world, start a business, and meet people.

Yet, everyone is addicted to social media?

Use this shit to your advantage, market yourself, your business, and you will be playing life on easy mode.

No other time in history could you make millions of dollars by simply pressing buttons on a computer screen.

In other words: "If you can't beat them, join them."
 

TheFinalEpic

Pelican
Catholic
Gold Member
king bast said:
TheFinalEpic said:
People that let technology take over their lives are idiots.

We live in the best time to be able to impact the world, start a business, and meet people.

Yet, everyone is addicted to social media?

Use this shit to your advantage, market yourself, your business, and you will be playing life on easy mode.

No other time in history could you make millions of dollars by simply pressing buttons on a computer screen.

In other words: "If you can't beat them, join them."

Not even in the slightest. Be a producer, not a consumer.
 

king bast

Kingfisher
Protestant
TheFinalEpic said:
king bast said:
TheFinalEpic said:
People that let technology take over their lives are idiots.

We live in the best time to be able to impact the world, start a business, and meet people.

Yet, everyone is addicted to social media?

Use this shit to your advantage, market yourself, your business, and you will be playing life on easy mode.

No other time in history could you make millions of dollars by simply pressing buttons on a computer screen.

In other words: "If you can't beat them, join them."

Not even in the slightest. Be a producer, not a consumer.


When you become a producer of idiot fodder, your lifestyle becomes far more dependent on it than it does for a mere consumer.

You become a high-level idiot, but still an idiot nonetheless.
 

TheFinalEpic

Pelican
Catholic
Gold Member
If you think that marketing and getting your business or product out there makes you an idiot, so be it.

I'll keep making my money.
 

Dodgy

Robin
I don't want to derail this thread but I found this article on a different kind of "bad" technology...
AI can be sexist and racist — it’s time to make it fair

Computer scientists must identify sources of bias, de-bias training data and develop artificial-intelligence algorithms that are robust to skews in the data, argue James Zou and Londa Schiebinger.


When Google Translate converts news articles written in Spanish into English, phrases referring to women often become ‘he said’ or ‘he wrote’. Software designed to warn people using Nikon cameras when the person they are photographing seems to be blinking tends to interpret Asians as always blinking. Word embedding, a popular algorithm used to process and analyse large amounts of natural-language data, characterizes European American names as pleasant and African American ones as unpleasant.

These are just a few of the many examples uncovered so far of artificial intelligence (AI) applications systematically discriminating against specific populations.

Biased decision-making is hardly unique to AI, but as many researchers have noted, the growing scope of AI makes it particularly important to address. Indeed, the ubiquitous nature of the problem means that we need systematic solutions. Here we map out several possible strategies.

Skewed data

In both academia and industry, computer scientists tend to receive kudos (from publications to media coverage) for training ever more sophisticated algorithms. Relatively little attention is paid to how data are collected, processed and organized.

A major driver of bias in AI is the training data. Most machine-learning tasks are trained on large, annotated data sets. Deep neural networks for image classification, for instance, are often trained on ImageNet, a set of more than 14 million labelled images. In natural-language processing, standard algorithms are trained on corpora consisting of billions of words. Researchers typically construct such data sets by scraping websites, such as Google Images and Google News, using specific query terms, or by aggregating easy-to-access information from sources such as Wikipedia. These data sets are then annotated, often by graduate students or through crowdsourcing platforms such as Amazon Mechanical Turk.

Such methods can unintentionally produce data that encode gender, ethnic and cultural biases.

Frequently, some groups are over-represented and others are under-represented. More than 45% of ImageNet data, which fuels research in computer vision, comes from the United States, home to only 4% of the world’s population. By contrast, China and India together contribute just 3% of ImageNet data, even though these countries represent 36% of the world’s population. This lack of geodiversity partly explains why computer vision algorithms label a photograph of a traditional US bride dressed in white as ‘bride’, ‘dress’, ‘woman’, ‘wedding’, but a photograph of a North Indian bride as ‘performance art’ and ‘costume’.

In medicine, machine-learning predictors can be particularly vulnerable to biased training sets, because medical data are especially costly to produce and label. Last year, researchers used deep learning to identify skin cancer from photographs. They trained their model on a data set of 129,450 images, 60% of which were scraped from Google Images. But fewer than 5% of these images are of dark-skinned individuals, and the algorithm wasn’t tested on dark-skinned people. Thus the performance of the classifier could vary substantially across different populations.

Another source of bias can be traced to the algorithms themselves.

A typical machine-learning program will try to maximize overall prediction accuracy for the training data. If a specific group of individuals appears more frequently than others in the training data, the program will optimize for those individuals because this boosts overall accuracy. Computer scientists evaluate algorithms on ‘test’ data sets, but usually these are random sub-samples of the original training set and so are likely to contain the same biases.
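A tiny numeric sketch (with made-up counts) shows how optimizing for overall accuracy can hide poor performance on an under-represented group:

```python
# Hypothetical evaluation counts: the majority group dominates the data,
# so a model can score well overall while failing the minority group.
majority = {"n": 900, "correct": 873}  # 97% accuracy on the majority group
minority = {"n": 100, "correct": 60}   # 60% accuracy on the minority group

overall = (majority["correct"] + minority["correct"]) / (majority["n"] + minority["n"])
print(f"Overall accuracy:  {overall:.1%}")                            # 93.3%
print(f"Majority accuracy: {majority['correct'] / majority['n']:.1%}")  # 97.0%
print(f"Minority accuracy: {minority['correct'] / minority['n']:.1%}")  # 60.0%
```

Because the majority group contributes nine of every ten test examples, the headline 93% figure is almost entirely driven by majority-group performance, which is exactly the dynamic the paragraph above describes.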

Flawed algorithms can amplify biases through feedback loops. Consider the case of statistically trained systems such as Google Translate defaulting to the masculine pronoun. This patterning is driven by the ratio of masculine pronouns to feminine pronouns in English corpora being 2:1. Worse, each time a translation program defaults to ‘he said’, it increases the relative frequency of the masculine pronoun on the web — potentially reversing hard-won advances towards equity. The ratio of masculine to feminine pronouns has fallen from 4:1 in the 1960s, thanks to large-scale social transformations.

Tipping the balance

Biases in the data often reflect deep and hidden imbalances in institutional infrastructures and social power relations. Wikipedia, for example, seems like a rich and diverse data source. But fewer than 18% of the site’s biographical entries are on women. Articles about women link to articles about men more often than vice versa, which makes men more visible to search engines. They also include more mentions of romantic partners and family.

Thus, technical care and social awareness must be brought to the building of data sets for training. Specifically, steps should be taken to ensure that such data sets are diverse and do not under-represent particular groups. This means going beyond convenient classifications — 'woman/man', 'black/white', and so on — which fail to capture the complexities of gender and ethnic identities.

Some researchers are already starting to work on this (see Nature 558, 357–360; 2018). For instance, computer scientists recently revealed that commercial facial recognition systems misclassify gender much more often when presented with darker-skinned women compared with lighter-skinned men, with an error rate of 35% versus 0.8%. To address this, the researchers curated a new image data set composed of 1,270 individuals, balanced in gender and ethnicity. Retraining and fine-tuning existing face-classification algorithms using these data should improve their accuracy.

To help identify sources of bias, we recommend that annotators systematically label the content of training data sets with standardized metadata. Several research groups are already designing ‘datasheets’ that contain metadata and ‘nutrition labels’ for machine-learning data sets (http://datanutrition.media.mit.edu/).

Every training data set should be accompanied by information on how the data were collected and annotated. If data contain information about people, then summary statistics on the geography, gender, ethnicity and other demographic information should be provided (see ‘Image power’). If the data labelling is done through crowdsourcing, then basic information about the crowd participants should be included, alongside the exact request or instruction that they were given.

As much as possible, data curators should provide the precise definition of descriptors tied to the data. For instance, in the case of criminal-justice data, appreciating the type of ‘crime’ that a model has been trained on will clarify how that model should be applied and interpreted.

Built-in fixes

Many journals already require authors to provide similar types of information on experimental data as a prerequisite for publication. For instance, Nature asks authors to upload all microarray data to the open-access repository Gene Expression Omnibus — which in turn requires authors to submit metadata on the experimental protocol. We encourage the organizers of machine-learning conferences, such as the International Conference on Machine Learning, to request standardized metadata as an essential component of the submission and peer-review process. The hosts of data repositories, such as OpenML, and AI competition platforms, such as Kaggle, should do the same.

Lastly, computer scientists should strive to develop algorithms that are more robust to human biases in the data.

Various approaches are being pursued. One involves incorporating constraints and essentially nudging the machine-learning model to ensure that it achieves equitable performance across different subpopulations and between similar individuals. A related approach involves changing the learning algorithm to reduce its dependence on sensitive attributes, such as ethnicity, gender, income — and any information that is correlated with those characteristics.

Such nascent de-biasing approaches are promising, but they need to be refined and evaluated in the real world.

An open challenge with these types of solutions, however, is that ethnicity, gender and other relevant information need to be accurately recorded. Unless the appropriate categories are captured, it’s difficult to know what constraints to impose on the model, or what corrections to make. The approaches also require algorithm designers to decide a priori what types of biases they want to avoid.

A complementary approach is to use machine learning itself to identify and quantify bias in algorithms and data. We call this conducting an AI audit, in which the auditor is an algorithm that systematically probes the original machine-learning model to identify biases in both the model and the training data.

An example of this is our recent work using a popular machine-learning method called word embedding to quantify historical stereotypes in the United States. Word embedding maps each English word to a point in space (a geometric vector) such that the distance between vectors captures semantic similarities between corresponding words. It captures analogy relations, such as ‘man’ is to ‘king’ as ‘woman’ is to ‘queen’. We developed an algorithm — the AI auditor — to query the word embedding for other gender analogies. This has revealed that ‘man’ is to ‘doctor’ as ‘woman’ is to ‘nurse’, and that ‘man’ is to ‘computer programmer’ as ‘woman’ is to ‘homemaker’.
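The analogy queries described above can be sketched with toy vectors. The 2-D coordinates and the `analogy` helper below are hand-made for illustration, not a real trained embedding:

```python
import numpy as np

# Hand-made 2-D word vectors (purely illustrative). The second axis
# plays the role of the "gender direction" in a real embedding.
vectors = {
    "man":   np.array([1.0,  1.0]),
    "woman": np.array([1.0, -1.0]),
    "king":  np.array([3.0,  1.0]),
    "queen": np.array([3.0, -1.0]),
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by finding the word nearest to b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    return min(
        (w for w in vectors if w not in (a, b, c)),
        key=lambda w: np.linalg.norm(vectors[w] - target),
    )

print(analogy("man", "king", "woman"))  # queen
```

An AI auditor in the sense described above is essentially this query run systematically: sweep `b` over occupation words and inspect which words the embedding returns for each gender.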

Once the auditor reveals stereotypes in the word embedding and in the original text data, it is possible to reduce bias by modifying the locations of the word vectors. Moreover, by assessing how stereotypes have evolved, algorithms that are trained on historical texts can be de-biased. Embeddings for each decade of US text data from Google Books from 1910 to 1990 reveal, for instance, shocking and shifting attitudes towards Asian Americans. This group goes from being described as 'monstrous' and 'barbaric' in 1910 to 'inhibited' and 'sensitive' in 1990 — with abrupt transitions after the Second World War and the immigration waves of the 1980s.

Getting it right

As computer scientists, ethicists, social scientists and others strive to improve the fairness of data and of AI, all of us need to think about appropriate notions of fairness. Should the data be representative of the world as it is, or of a world that many would aspire to? Likewise, should an AI tool used to assess potential candidates for a job evaluate talent, or the likelihood that the person will assimilate well into the work environment? Who should decide which notions of fairness to prioritize?

To address these questions and evaluate the broader impact of training data and algorithms, machine-learning researchers must engage with social scientists, and experts in the humanities, gender, medicine, the environment and law. Various efforts are under way to try to foster such collaboration, including the ‘Human-Centered AI’ initiative that we are involved in at Stanford University in California. And this engagement must begin at the undergraduate level. Students should examine the social context of AI at the same time as they learn about how algorithms work.

Devices, programs and processes shape our attitudes, behaviours and culture. AI is transforming economies and societies, changing the way we communicate and work and reshaping governance and politics. Our societies have long endured inequalities. AI must not unintentionally sustain or even worsen them.

TLDR, once AI dominates our lives it will be racist and misogynistic. What we need are social and gender scientists and experts to assist in developing a bias-free and gender neutral computer overlord. Link is here: https://www.nature.com/articles/d41...il&utm_campaign=briefing&utm_content=20180720
 

Syberpunk

Pelican
Gold Member
Days of Broken Arrows said:
The 1950s: People listened to music on tinny-sounding portable devices (transistors) and it sounded awful.

The 1960s-1980s: People listened to music on big stereo speakers and it sounded mindblowing.

The 2000s-2010s: People listen to music on tinny-sounding portable devices (smart phones) and it sounds awful.

This is progress?

Are you saying this isn't progress?

 

BlueMark

Woodpecker
Gold Member
In many ways, technology is what you make of it. For those of us who are into self improvement, it is easy to find any kind of information online. Languages, DIY guides, instructional videos, etc. It is a dream come true for autodidacts everywhere.

At the same time, there is a very real race-to-the-bottom effect that I think will soon culminate in some sort of reckoning. A lot has already been written on this forum about how social media and dating apps affect the behavior of women, so I won't go into that. But this effect plays out in other ways too.

I work with computers and software a lot and keep a close eye on the industry. Here are some of my observations.

1. There is no sense of restraint in the technology industry. Nobody seems to ask "just because we can, does that mean we should?" Just look at all the websites that nag you to download their apps. TripAdvisor, Yelp, Reddit, the list is very long. Some of these even hide some content from you if you use the phone browser instead of the app. If you downloaded an app for all these sites, your smartphone would be bloated with apps.

2. Many of these apps are developed by low-skilled programmers who think they are good programmers just because they attended a coding boot camp. They have no clue how computer software and hardware work underneath the top layer of whatever framework they are using to create web and mobile apps. Little thought is given to efficiency.

Even though computer and smartphone hardware is getting faster, it doesn't feel that way because software keeps getting more bloated. I wish we could have a collective moratorium on increasing software features and focus on optimizing for speed instead.

3. Online banking and payment services are very convenient, but their password requirements are a pain in the ass because they vary so much across services. I find myself constantly having to create new passwords to fulfill requirements that are different (not necessarily stricter) from those of all the existing services. Worse, when I set up auto payments, I don't end up logging in for several months. When I eventually do (e.g. to enter a new credit card), I forget the special password that I created and often get locked out for trying too many passwords, and then have to call the company. This wouldn't be a problem if these websites showed the password requirements during login to help me remember, but these IT departments are not run by people who care about efficiency or practicality. They'd rather let the customer service department handle the calls to unblock logins after too many failed retry attempts.

These are a few of the patterns I've noticed in the computer and phone technology sector in the past few years. There seems to be a pervasive mindset in each company that "if we don't produce X, our competitors will, and thus beat us." That is probably the economic explanation for the "race to the bottom" effect in technology.
 

Fortis

Crow
Gold Member
Technology only dominates your life if you let it.

Personally, technology makes my life so easy, it's stupid. I've automated a lot of my daily life so I can focus on coming up with ideas to get to the next level.

Set up some rules and follow them. Otherwise, you will be manipulated.
 

RIslander

 
Banned
Syberpunk said:
Days of Broken Arrows said:
The 1950s: People listened to music on tinny-sounding portable devices (transistors) and it sounded awful.

The 1960s-1980s: People listened to music on big stereo speakers and it sounded mindblowing.

The 2000s-2010s: People listen to music on tinny-sounding portable devices (smart phones) and it sounds awful.

This is progress?

Are you saying this isn't progress?



If there's anything in this life that I hate... it's how Arnold kicks so much ass but at the same time is a total cuck.
 

Handsome Creepy Eel

Owl
Catholic
Gold Member
TheFinalEpic said:
If you think that marketing and getting your business or product out there makes you an idiot, so be it.

I'll keep making my money.

I think King Bast was referring to the entangling progressive idiocy of the fashion industry, the app and video-game industries, etc., not to marketing consulting services through Facebook.
 

redpillage

 
Banned
Gold Member
Fortis said:
Technology only dominates your life if you let it.

Personally, technology makes my life so easy, it's stupid. I've automated a lot of my daily life so I can focus on coming up with ideas to get to the next level.

Set up some rules and follow them. Otherwise, you will be manipulated.

Exactly. Use your brain, or someone else will use it for you.
 