Words That Matter: Seniors’ Opinions

October 15, 2018

In Words That Matter, the newest English senior elective, taught by Ms. Land, Mr. Theirmann, Ms. Feidelman, and Ms. Solis, students were tasked with writing an opinion piece about something they care about. After reading Strength to Love by Martin Luther King Jr. and Men Explain Things to Me by Rebecca Solnit, students were well equipped to employ writing techniques and rhetorical devices to best make their point. Scroll over the interactive image above to see the titles of the works, and then select specific articles from the 'Words That Matter' menu at the top of the screen.


Liquid Gold


For the better part of the past century, Americans have not valued water as they should. Most Americans have seen water as a boundless resource. But in the time it takes you to read this sentence, California will have used around 2.8 million gallons of fresh water. By the time school is over today, that number will have climbed to 7 billion gallons.

Some will fill toilets, sinks, and showers, but most will be sprayed across the thousands and thousands of agricultural fields in our state. About 70 percent of the water used in California is soaked up by agriculture and irrigation, helping to feed mouths all over the state, country, and world. But with California's annual population gain estimated at around 340,000 for the next 12 years, or, as the Public Policy Institute puts it, "adding the size of the city of Anaheim every year," water demands are going to be increasingly difficult to meet. If we want to ensure every Californian has safe and easy access to drinking water, we need to charge more for water and implement an income-based water-affordability program.

Income-based water pricing essentially means the more money you make, the more you pay for water, and the less money you make, the less you pay. Philadelphia deployed such a program early last year and has found success. Roger Colton, the economist leading these efforts, predicts these systems will reduce shutoffs, increase the share of water bills paid, and increase the city's revenue. Baltimore and Detroit are also expected to establish this system in the coming year. Ohio, Colorado, and New Jersey all use income-based utility charges, but not for water. Why? Because people thought their water would be plentiful forever.
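To make the idea concrete, here is a minimal sketch of how an income-based affordability program might calculate a household's monthly bill. The income cutoffs and percentage caps below are purely illustrative assumptions, not the actual rates used in Philadelphia, Baltimore, or anywhere else.

```python
# Illustrative sketch of an income-based water-affordability program.
# The income tiers and percentage caps are hypothetical, not any city's real rates.

def monthly_water_bill(standard_bill: float, monthly_income: float) -> float:
    """Cap a household's water bill at a share of income that shrinks for
    lower earners: the less you make, the less you pay."""
    if monthly_income < 1500:        # lowest-income tier (assumed cutoff)
        cap = 0.02 * monthly_income  # bill capped at 2% of income
    elif monthly_income < 3000:      # middle tier (assumed cutoff)
        cap = 0.03 * monthly_income  # bill capped at 3% of income
    else:
        return standard_bill         # higher earners pay the full volumetric rate
    return min(standard_bill, cap)

# Example: the same $90 standard bill lands very differently by income.
print(monthly_water_bill(90.0, 1200.0))  # low-income household pays $24
print(monthly_water_bill(90.0, 5000.0))  # higher-income household pays the full $90
```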

Much of the world relies on aquifers, large underground reservoirs full of fresh water, filled over thousands of years by rainwater. The Ogallala Aquifer in the central US extends through 8 states and provides water for the middle third of America.

This aquifer is the largest in the world, but like other aquifers worldwide, its supply is quickly draining. A third of the world's population relies on these magical pools, and 20 percent of the world depends on them for agriculture. However, of the world's most bountiful aquifers, like the Ogallala, 57 percent have exceeded their sustainability points (meaning they can't be refilled enough to make a significant difference in our lifetimes), and 35 percent are considered 'significantly distressed'. Major cities rely on these invisible oceans, but soon they may disappear altogether.

Just last year, Cape Town showed the world what happens when a major city runs out of water. In late 2017, the city anticipated 'Day Zero', the moment when South Africa's second-largest city would run out of water. People panicked, thousands of agricultural workers lost their jobs, tourism plummeted, and political chaos ensued. For months, the city's 4 million residents have had to wait in long lines, empty jugs in hand, to get water. Their water use per capita is 85% lower than that of the average Californian. The top water consumers in the city had their names published on a list for all Capetonians to shame. Cape Town's Day Zero has been pushed back to 2019, but that depends on how much residents can continue to conserve.

Climate change contributes to this shortage, but growing populations and weak infrastructure are the main culprits. How are an estimated 7.6 billion (and growing) people all supposed to share this resource, with places like Mexico City losing 50% of all their drinking water to leaky pipes?

There is not a single person on earth who can live without water, yet it is the most valuable commodity that exists, which is why we need to charge more for it. A lot more. People won't stop using water, and increasing rates forces people to conserve more while giving water utilities much-needed revenue to fix crumbling infrastructure.

Many cities are already taking these steps. Los Angeles is charging people based on volume (people who use large amounts of water pay higher rates). Time-of-day pricing, seasonal rates, and water surcharges also help conserve water and increase revenue. The money could fund research into more efficient desalination, help communities that lack reliable access to water, or build wastewater treatment sites that improve sanitation and water recycling.

Water is a human necessity, but certain systems need to be set up to ensure that we high schoolers can grow up to see everyone's water needs met. Too often, major issues like these don't seem real, or seem too abstract, until it is too late. If we want to ensure that our children can live in a world where they have safe, easy access to clean drinking water, we need to pay for it. Now.


Playing the ED Game: Disparity and Distress

Ever wonder how to instill panic and fear into a seventeen-year-old? Just ask them this simple question: Where are you going to college?

Today, the college application process has been blown way out of proportion. Students agonize over a single missed point on the ACT or one fewer extracurricular. But that's not the worst part: the worst part is a tricky little scheme concocted by higher-education institutions called "Early Decision." For those unfamiliar with this inequitable scam, Early Decision is when a student applies early and commits to attend if accepted, in exchange for hearing back months earlier than their peers. Seems like a fair trade, no? What we do not often think about, however, is why Early Decision really exists, and who it really benefits. In truth, it benefits very few.

In fact, less well-off families are at a significant disadvantage when it comes to college applications because of Early Decision. While colleges are pretty good at providing need-based financial aid to those who need it, Early Decision hinders that wonderful gift. Roughly two-thirds of full-time college students receive financial aid, but by applying Early Decision, students are not able to compare financial aid packages. If a student applies through regular decision, they are able to compare various offers. Under Early Decision, the student must attend the college, regardless of the financial aid package offered. Thus, only the very wealthy can afford to agree to the cost of a college education before even knowing exactly what that cost is.

On top of that, students who apply Early Decision face a November 1st deadline, compared to January 1st for Regular Decision. This means achieving desired test scores by an even earlier date. Tutoring through pricey test-preparation companies has become more and more prevalent as students chase the highest possible scores, an option available only to the rich and privileged.

With Early Decision, not only is the application due earlier, but the decision of where to attend college is made much earlier too. Students applying ED must know where they want to attend college by November 1st, while those applying through Regular Decision don't have to decide until May 1st, a whopping six months later. Applying Early Decision means that a first-semester high school senior must know with unwavering certainty that College X is the perfect fit. For a seventeen-year-old, so much can change in those six months. How is a student expected to make an informed decision about where to go to college so early on?

One of the most sure-fire ways to confirm whether a school is the right fit for a student is to physically visit the campus. However, most students do not have the luxury of jet-setting around the country, or the world, to tour the colleges of their choice. Therefore, only the wealthy have the ability to make a confident and informed decision so early.

Beyond the socio-economic disparity it creates, Early Decision also fosters an unhealthy culture around the college application process. An article I read encourages students to "do everything possible to take advantage" of Early Decision, because it is "the best arrow in [the student's] quiver, so leaving it there unused just seems foolish." Students are buying into this mentality. At Head-Royce, 60% of the Class of 2013 applied early in some form. A mere five years later, an astounding 90% of the Class of 2018 applied early. Applying to college should not be a strategy game. It should be about finding the best fit for the student, not about exploiting any opportunity to get into the most selective college possible.

The sole purpose of Early Decision is to manipulate colleges' perceived prestige. Take Middlebury: the acceptance rate it parades is 13%, but those applying Early Decision have a 43% chance of getting admitted. That is a hefty 30-percentage-point gap between the Early and Regular Decision admit rates. Middlebury is no exception. Across the board, countless colleges exhibit similar trends. So let me ask you, who really benefits from Early Decision? I can give you a hint: it isn't the student.

Not only does Early Decision create blatant disparities between social classes, but it completely distorts the college application process. Early Decision exists to serve the desires of greedy colleges and universities. It does not have any student's best interest in mind, and it must go.

 


When it Comes to Your Health, Don’t Do it Alone


Interpreting a human's DNA is a complex process that is best left to medical professionals. Yet in 2017, the number of people who had taken a private DNA test reached an all-time high, exceeding 12 million in total. Personal DNA testing, or genomic sequencing, is a profitable business for companies like 23andMe. The market for personal genetic kits is expected to grow to $10.75 billion by 2020 (Ellen). From a big-business standpoint this is good news; however, these companies are preying on customers who have an understandable interest in learning about their health.

Just five years ago, in 2013, the FDA ordered 23andMe to abandon its personal genome sequencing service (Fox). Last year, the FDA completely changed its tone, allowing the company to sell tests for 10 diseases (Fox). The motives for this shift seem dubious at best, especially since the FDA acknowledges that the information found is often stress-inducing and incomplete. To remedy this, the FDA requires that "customers first [click] an acknowledgment that they understand the results could cause them anxiety" (Fox). The FDA also recommends that the consumer "speak with a healthcare professional, genetic counselor, or equivalent professional" in conjunction with the evaluation (Fox).

Given these two aspects of the FDA's stance, why continue to allow private companies to provide this information to individuals? After receiving their results, customers could opt not to speak with a medical professional at all, which would likely leave the test taker gravely misinformed. Under the current model, companies are allowed to collect their money and leave the customer and the medical professionals to sweep up behind them. One must wonder what kind of lobbying the $1.5 billion company 23andMe did to change the FDA's stance (Peterson).

The accuracy of the evaluations and the ability of consumers to process genetic information without counseling or other professional help remain unknown (Fox). Even physicians can have trouble interpreting the data from genetic assessments, because many environmental factors work alongside DNA (Fox).

Alzheimer's disease, for one example, is a degenerative brain disease that can harm a person's memory and cognitive function. Alzheimer's disease is misdiagnosed regularly; in fact, only 78% of diagnoses are accurate (Reinberg). A false negative could cause a patient to underreact if they actually do have the disease, leaving them unprepared when the symptoms appear. Conversely, a false positive could cause extreme, undue stress in a patient who does not have the condition.

Sadly, no blood test or imaging test can accurately diagnose Alzheimer's every time, which is why misdiagnoses occur (Reinberg). Thus, a patient needs ongoing medical care and more robust observation than a single genetic inquiry can provide. Obviously, patients are better served by a trained and experienced Alzheimer's specialist tending to them. As Steven Reinberg writes in his article, "although no cure or effective treatment for Alzheimer's disease exists, a correct diagnosis is essential because some drugs can delay its progress and help preserve quality of life for as long as possible" (Reinberg). Financial planning is paramount for the sustainability of an Alzheimer's patient's care, and it should be done well in advance of the patient's deterioration.

To be sure, private DNA insights can be quite fun and interesting for more trivial attributes — for example, hair color or eye color. Barbara Ellen writes that she “rather enjoyed it on the level of an indulgent genome-oriented ‘pampering session’” (Ellen). However, for grave diseases like Alzheimer’s, medical professionals should be involved.

Barbara Ellen writes, "genetic-testing kits such as these could, if promoted and used responsibly, end up zoned completely away from legitimate science and medicine and placed where perhaps they belong, firmly in the lifestyle-extra zone" (Ellen). I recently did my own genetic investigation. Thankfully, I came back negative for all of the diseases, but I see now why it would have been terrible to get this information alone. When it is personal, irrational thought takes over. We all know the feeling of panic when our brains seem to give in to our instincts; it is the human fight-or-flight response. It is also why an educated and dispassionate doctor must be involved in matters of long-term health and safety. Just because consumers can access potentially important information on their own does not mean that they should.


Why Society Needs More Empathy


What is empathy? Well, the true definition depends on which type of empathy you're defining. Surprisingly, there are two types: the first is affective empathy, the similar inner feelings that arise when we see someone else show emotion. In other words, it is a slight mirroring effect that our brains use. The second type is cognitive empathy, or the ability to understand why people display certain emotions and what those emotions mean.

So, why is empathy important? Empathy can bring people together despite their differences, and it can reduce aggression and bullying. A study by Roots of Empathy showed a reduction in aggressive behaviors in kids between the ages of 5 and 8 after the organization taught classes designed to increase empathy, compared to an increase in aggressive behaviors in control classrooms. The study also showed a clear increase in both types of empathy to go along with the lowered aggression. If we could translate these results to adults and teens, increased empathy could go a long way toward reducing mass murders, domestic abuse, and assault in general.

Empathy has not only been shown to reduce aggression; it has also been shown to reduce racial prejudice. In a study conducted in 2011, a group of individuals was asked to take the perspective of Glen, a black man who is discriminated against, and then analyze the situation in which he is discriminated against. This group showed far less racial bias than a group told to be totally objective. This kind of perspective-taking should be used in law enforcement and by judges as a way to combat racial prejudice. And yes, law enforcement deals with high-stress situations in which taking a second to step into the other person's shoes is far from the most pressing issue, but if officers increase their empathy overall, their unconscious racial prejudices might not have such a large impact on the outcome of those situations.

The same study also showed that increased cognitive empathy can make daily interactions far more pleasant. One group was shown a picture of a black man and then told to write about his daily life from his perspective. A second group was shown the same picture and told to write about his daily life from an objective standpoint. A third group was shown the same picture and given no instructions other than to write about his daily life. Then all three groups interacted with a black woman who rated their interactions. She found the interactions with the group told to write from the man's perspective more enjoyable and more positive than those with the other two groups. There have been studies suggesting that empathy can increase racial bias when people interact with groups of different races. However, in one such study, the topic for discussion was set as racial biases about one of the races in the room, which set up the conversation with unrealistic tensions that don't normally exist in day-to-day interaction.

After all of this, you may be wondering how you can increase your empathy and the empathy of those around you. Roman Krznaric, the author of Empathy: Why It Matters and How to Get It, suggests consciously trying to take the perspective of others on a regular basis. Other ways to increase empathy can be as simple as discovering commonalities between yourself and strangers or just being curious about the lives other people live. Increasing empathy in the next generation can be even easier than increasing it within ourselves. Encouraging positive acts and helping to guide the moral compasses of little kids can heighten their empathy. Even telling stories that force you to get inside the minds of characters can increase empathy. And last of all, simply viewing people you may not know personally as human can help increase your empathy for them.


Why Mandatory Voting Can Save the United States’ Democracy


As the New York Times reported, just 60.2% of eligible voters made their voices heard in the 2016 presidential election. In such a close election, a few thousand votes separated the two candidates. Local elections are regularly decided by a few hundred votes. Mandatory voting, the requirement that all citizens vote in each election, creates a more engaged populace. It can change the results of elections to benefit the largest share of the population. If politicians knew that everyone would vote, they would be forced to appeal to a larger number of people. If more people voted, the government would be held more accountable by a public that felt more invested in its actions.

The concept of mandatory voting, despite its novelty domestically, is not a new idea internationally. In fact, Australia, Belgium, and Mexico all have compulsory voting, along with 19 other countries worldwide. Although it may seem outlandish to some Americans, developed and developing countries alike have found success with the model.

The words "mandatory" and "compulsory" may sound scary to the freedom-loving citizens of the United States. On its face, the concept might seem dictatorial, as forcing any body of people to complete a given task has negative connotations. However, "compulsory" does not mean that an offender would be thrown in jail; the penalties are small and nominal. In Australia, a $15-$35 fine is incurred for not voting. In Belgium, the penalty for choosing not to vote in four successive elections is disenfranchisement for ten years.

These penalties are fairly tame, but they can help create a culture of political engagement. Historically, it’s been clear that compulsory voting correlates with more citizen involvement. When the Netherlands abandoned compulsory voting, the voter turnout dropped 20%, and Venezuela saw a drop in voter turnout of 30% once the mandate was removed. If more of the populace is voting, a herd mentality leads others to vote. Even if a small fine isn’t enough to compel some voters, it will still change the culture.

One question people naturally have is: “what if I don’t like any of the options?” This might seem like the biggest barrier to mandatory voting, but it can, in fact, be mitigated quite easily. Simply adding an “abstain” option next to the candidates allows citizens to vote but still not support either candidate. In this way, citizens are engaged, but they still can opt out of choosing a candidate they hate. It forces the people to consider their choice, rather than just letting the civic duty slip their mind while they sit at home.

Detractors might also say that mandatory voting can’t work because some people work during the day, and they don’t have enough time to get to the polls. However, this problem can be easily solved, too. Voting day should be a national holiday. This solution also serves to benefit voters of lower socioeconomic status. As the Pew Research Center reported, those in a lower socioeconomic class got to the polls less frequently than average in the 2016 election, so a national holiday on voting day would amplify the voices of the most vulnerable.

Compulsory voting doesn't have to be unreasonable. Both Belgium and Australia waive the requirement for anyone who provides a valid reason, which prevents penalties for people who genuinely cannot make it to the polls. If the US were to implement mandatory voting, the system could be just as lenient, with the government simply waiving the rule when presented with a valid excuse. Ultimately, the goal is to change the culture and increase pride in the system, not to punish citizens.

In the United States today, political engagement is on the rise. Young people are making their voices heard, much of the population feels strongly about certain issues, and, more than ever, decisions made today will affect us for hundreds of years to come. It's important to capitalize on this momentum and make sure that the most vulnerable in our society are heard. Mandatory voting, despite the negative connotations that come with its name, can unite our nation around a crucial civic duty. Today, press freedoms are under attack and fascism is on the rise, but mandatory voting can help reverse this dangerous path and save our democracy.


Ariana Grande, Mac Miller and the Gendered Culture of Blame and Responsibility


Nearly three weeks ago, rapper Mac Miller was found dead in his Studio City, CA home of an apparent overdose. Miller had been vocal about his struggle with drug addiction since early in his career. Yet, when news broke of his death, fans flooded the social media accounts of his ex-girlfriend, pop singer Ariana Grande, shaming her in the comments. They cited his overdose as a response to her recent engagement to SNL star Pete Davidson, leaving messages like "You lowkey evil" and "THIS IS YOUR FAULT !!" Comments like these received hundreds of likes in support from other social media users. The immediate response to a struggling drug addict's overdose was to blame his ex-girlfriend and to claim she was responsible for pushing him over the edge. Grande herself even went on to post a tribute to Miller, writing "i'm so sorry i couldn't fix or take your pain away."

Throughout their two-year relationship, Grande encouraged Miller to stop using, and she even took partial responsibility for his death for not being able to fix his addiction. Earlier this year, Miller released a public statement wishing her nothing but happiness in the future, but fans ignored him and continued to berate her in the months leading up to his death. Grande had to disable comments on her 3,600 Instagram photos to escape the blame of the Internet.

This incident has shown once again what so many women know all too well: we are held responsible for the men in our lives, no matter how self-destructive they may be. Miller's death is an all-too-public example of the everyday phenomenon of women feeling responsible, and being held responsible, for the well-being of men, even if that means sacrificing their own health. Grande was hospitalized as a result of an anxiety-induced breakdown just one day after news broke of Miller's death.

This event is just the most recent in a long line of female celebrities receiving backlash for the death of a spouse or ex-spouse. In June, TV host Anthony Bourdain was found dead of an apparent suicide, and the Internet was quick to point a finger at his partner, Asia Argento. Both had been explicitly clear that their relationship did not hew to traditional boundaries, yet many argued Bourdain's suicide was prompted by Argento holding another man's hand in Italy. People seem to search for reasons for a man's actions that have nothing to do with him.

Yet when pop singer Demi Lovato overdosed this summer, no one claimed it was her partner's fault or asked where he was during the incident. Everyone sent their thoughts and prayers to her in the hospital and then moved on. And where was the onslaught of concerned Internet users when Amy Winehouse died in her sleep after drinking too much alcohol? Why was she blamed for not going to rehab sooner, and not her partner for failing to fix her?

Since the days of Adam and Eve, women have been made out to be the more responsible sex, but without the benefits of being the more systematically powerful one. From parents asking us to remind our brothers to grab their lunchboxes to promising our teachers we'll clean up the trash from lunch, even if it wasn't ours — women are told to watch out for our male friends from the moment we are taught how to tie our shoes.

This culture is not just one we observe within our favorite celebrity relationships. Author Arlie Hochschild calls this care-taking and blame-taking a “second shift,” a job which women have come to accept. Countless times a day in our Head-Royce community, I see my girlfriends remind their guy friends that the All School Fair is this Friday not next, despite the many announcements over the past few weeks. I’ve watched freshmen girls grab their male friends’ calculators as they rush into Harper physics to grab their favorite seat. Our community is another microcosm of the greater societal issue of placing blame where it does not belong.

So where do we go from here? Ariana Grande still receives thousands of tweets every day from angry fans calling her responsible. All over the world, women are the thankless blame-takers while also being the assumed caretakers. We subconsciously watch out for male peers, unaware of the mental space that such constant vigilance requires. As a society, we must first become conscious of the roles we play before we can work toward shifting away from this gendered culture.


The White Benefit of the Doubt

Brock Turner, a blonde Stanford athlete, sexually assaulted a woman behind a dumpster. He was sentenced to only six months in county jail. Charged and convicted on three counts, he served only three months because of "good behavior." Turner's father argued that he should receive probation, not jail time, claiming that his son's "life will never be the one that he dreamed about and worked so hard to achieve. That is a steep price to pay for 20 minutes of action." In other words, Turner deserved a second chance because those twenty minutes shouldn't define him.

Why did Turner deserve a second chance? Why should 20 minutes not define him? Simply put, Turner is a white, privileged man, and, therefore, his future had to be considered in the deliberation of the case. White male privilege in our legal system is omnipresent. Being white and male means you are a double beneficiary of a biased system. Whiteness is a defining characteristic in too many cases. In sexual assault cases, a white man's perspective is valued, it feels, more than that of the woman accusing him. Cases can quickly devolve into he-said-she-said, where we consider the shades of grey even when the case is black and white. When a white man is the school shooter, reporters and police ask about his mental health and family life.

This benefit of the doubt is uniquely white in nature. It is white privilege to the highest power. Men of color do not receive the same benefit of the doubt in sexual assault cases, or in any case for that matter. According to a 2017 study by the United States Sentencing Commission (USSC), black men, on average, receive sentences 20% longer than white men's for identical crimes. According to a 2014 University of Michigan Law School study, the incarceration rate in America is 500% higher for black men than for white men. Lastly, with no difference except race, black men are 75% more likely than white offenders to face a charge carrying a mandatory minimum sentence. A mandatory minimum sentence means you aren't released for "good behavior" halfway through your sentence.

The 1989 Central Park Five case perfectly illustrates these injustices. In April 1989, several Black and Latino teenagers were arrested in Central Park for vandalism and general mischief. Hours later, while the boys were already in custody, Trisha Meili was taken to the hospital after being raped and brutally beaten. Instead of looking for other suspects, the police decided the boys they had in custody were the perpetrators. Teen Vogue reporter Lincoln Anthony-Blades, in his recap of the case, wrote that one of the boys "informed the police that he was completely unaware of what happened to her, but the police responded by threatening him with a 25-years-to-life sentence in Rikers Island if they didn't admit to the crime." The police coaxed these five young men into confessing. Four of them, minors, received sentences of up to fifteen years in prison, and the fifth, charged as an adult, was sent to Rikers Island, a notorious jail complex.

Twelve years after the sentencing, Matias Reyes, a serial rapist, confessed to the rape of Trisha Meili. Was the Central Park Five case just lazy police work? No, it was racist police work. These boys were coerced into false confessions despite there being no evidence that they committed the crime. Blatant racial prejudice was at play, and the five were never given the chance to prove themselves innocent. They were not given the benefit of the doubt or even a fair trial. This case was a police-said problem, and it points to deeper racial biases in the legal system.

I've been thinking about the Kavanaugh accusations and hearings, and how readily male senators jumped to his defense shows a problem with where our leaders' morals lie: the leaders who make our laws are uncertain about their definitions of sexual assault and, more generally, about how men should treat women; however, their definitions are only in question when it comes to privileged white men like themselves. Indeed, being male isn't enough for Republican leaders to defend you. You must be white and male: the embodiment of privilege.

During the Central Park Five case, Donald Trump published an advertisement in New York newspapers calling for the return of the death penalty, yet we do not see such calls to arms against Kavanaugh; rather, we see praise. With Kavanaugh and Turner, we see excuses: he was a drunken teen, it was a mistake, he's a good guy. Excuses weren't made for the Central Park Five. Excuses aren't made for the victims of police brutality who are shot because of their skin color. We, as a country, need to evaluate whom we deem credible and for whom we are willing to make excuses.


Bungie Scamming Destiny Players Signals the Death of Payable Games


Activision, the publisher behind Bungie's Destiny franchise, has an estimated net worth of a whopping $18.9 billion. The gaming community has raised multiple ethical questions in light of Bungie's recent greedy monetary agenda. Why is such a successful game developer asking its players for even more money? The monetary exploitation arises from Bungie's previous success with the Halo franchise. Destiny, their new franchise, is also a futuristic multiplayer shooter. According to CEO Robert Kotick, Destiny 1 brought in a monstrous $500 million in retail sales on release day. Unfortunately, Bungie became so focused on funding innovative games for the future that their new franchise imploded.

In 2015, Activision patented a system that encourages players to buy more in-game content by falsely reporting their progress in-game. The patent’s purpose is to have players make more frequent microtransactions, paying real money for in-game currency. The problem with this system emerges when gaming corporations abuse their power over (mostly) young teen audiences.

The system essentially tags players based on skill level (in-game stats), play time, and availability of friends. Based on the configuration of the engine, newbie players are matched with more experienced players who are more valuable customers to Bungie. Consequently, Bungie created a dichotomy of high-skill and low-skill players. Through no fault of their own, new gamers are put at an intangible disadvantage against players who paid more to play the game. Microtransactions include cosmetics, weapons, vehicles, and downloadable content (DLC). Players who have been dominated in competitive matchmaking pay more money to be matched against players at their skill level. The players themselves don't know what trap they've been led into, but the microtransaction system is set up to "favor" Bungie's higher-valued customers.

For example, if I buy a virtual weapon (with real money), Destiny will place me in a multiplayer match where this weapon is particularly effective. This process takes into account everything from the average stats of players using a buyable weapon on a map to the most-used weapons of the other players in your lobby, so purchases can counter their playstyle. A greedy trick therefore had the unintended outcome of producing a social hierarchy of buyers and non-buyers. Bungie's (expensive) pay-to-play model has fueled a sentiment in the gaming community to do away with games that are detrimental to your wallet.

Destiny player EnergizerX did some math to help the community understand another controversy. He recorded gameplay on one of his characters and counted the experience points (XP) he gained per minute. Every level, the player is granted a bright engram, a loot box, which can be decrypted for cosmetics, weapons, vehicles, and emotes. However, Bungie applied a scaling factor that throttled XP the faster it was earned. Playing the game constantly could leave you with a displayed 4% of the XP you should be gaining. There is no limit to XP gain, but the more efficient you are, the less XP is shown on your progress bar. Again, Bungie subtly decreases the chance of acquiring (favorable) loot, so players are encouraged to buy these loot bundles, called bright engrams. Bungie, caught red-handed, addressed the public on their blog: "last weekend, we disabled a scaling mechanism that adjusted XP gains up and down without reflecting those adjustments in the UI… the silent nature of the mechanic betrayed the expectation of transparency that you have for Destiny 2" (bungie.net).
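To illustrate the kind of mechanic being described, here is a minimal sketch of a hidden XP throttle. This is not Bungie's actual code; the baseline rate and the shape of the curve are assumptions chosen only to reproduce the 4% figure EnergizerX reported.

```python
# Illustrative sketch of a "silent" XP scaling mechanic, not Bungie's real implementation.

def displayed_xp(earned_xp_per_minute: float, earned_xp: float) -> float:
    """Scale down the XP shown on the progress bar as a player's earn
    rate rises, without telling the player that any scaling happened."""
    baseline_rate = 50.0  # XP/min below which no throttling applies (assumed value)
    if earned_xp_per_minute <= baseline_rate:
        scale = 1.0
    else:
        # Diminishing returns: the faster you earn, the smaller the fraction displayed.
        scale = baseline_rate / earned_xp_per_minute
    return earned_xp * scale

# A casual player sees everything they earn; an efficient grinder sees only a sliver.
print(displayed_xp(40.0, 1000.0))    # 1000.0 displayed
print(displayed_xp(1250.0, 1000.0))  # 40.0 displayed, i.e. only 4% of the XP earned
```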

Fortnite, a popular battle royale game, has been the center of the gaming community for the last year. Fortnite's free-to-play model has undercut the case for pay-to-play games. Epic Games, Fortnite's developer, has created an immensely popular game without asking for any up-front payment. So why do games of equal caliber require fees to play? To put it in context, Bungie has amassed close to 100 million players across its entire Halo and Destiny franchises combined, while Epic Games has reported over 125 million downloads of Fortnite alone. Gaming, a fun pastime and now a sport, has traditionally been accessible only to a relatively privileged group because it costs money to buy games and play online, but this unprecedented shift has set the standard for games of the future. Gaming is rapidly adapting to the requests of the community and will continue to strive toward lower costs for players.

 


The Truth Behind Where Your Food Comes From

Have you ever wondered how many animals you've eaten in the last year? Well, to give you an idea, according to a study published by PBS, the average American eats 156 burgers per year. This abnormally large consumption rate results in the slaughter of nine billion animals in the United States alone.

Can this slaughter be justified? Many people defend their meat consumption by calling it "natural." In past eras, meat consumption may have been natural, but current meat industry practices differ so greatly from those of our ancestors that this excuse is no longer valid. For one, meat was hardly ever wasted. Tribes used many parts of an animal for food and the rest for clothing or shelter (Robinson, Tammy). In tribal societies, people knew where their meat was coming from. They hunted it themselves and performed rituals of respect toward the animal. Today, up to 40% of our meat is wasted, and 72% of Americans know nothing about where their meat comes from (PR Newswire). There is also a severe lack of respect for meat in our culture. For instance, the Huffington Post reported that "An undercover video shot by an animal rights group at an Iowa egg hatchery shows workers discarding unwanted chicks by sending them alive into a grinder, and other chicks falling through a sorting machine to die on the factory floor." Not only are these current practices unnatural, but they are also immoral.

This is not the only account of the meat industry acting unethically. According to PETA, "breeder chickens" are treated horrifically from their first day of life. Within their first ten days, their beaks are removed with hot blades so that they will not peck each other to death; death by pecking is common because confinement leaves the chickens frustrated. The beak pain is so unbearable for some of these chicks that they cannot eat, so they starve to death. However, this death may be a blessing. The chickens are confined to dirty, dark sheds with hardly any room to move. These conditions wear the chickens down so badly that after a year, they are removed from the shed, slaughtered, and replaced. These practices are far from "natural."

Unfortunately, it's not just chickens who are treated horrifically; pigs suffer the same atrocious lives. "During their four-month pregnancies, more than 90% of female pigs are kept in desolate 'gestation crates' — individual metal stalls so small and narrow the animals can't turn around or move more than a step forward or backwards" (Debra A. Miller). This immobility is cruel. These animals are no longer treated as living creatures; instead, they are treated like objects.

Not only is the meat industry inhumane, but it is also a leading cause of the destruction of our environment. Half the beef produced is raised in feedlots. According to Environment Encyclopedia, these tightly compacted spaces are "a significant source of the pollution flowing into surface waters and groundwater in the United States." In addition, the methane produced by the cattle is "twenty times more efficient at trapping heat in the atmosphere compared with carbon dioxide." For a single quarter-pound burger, 6.5 pounds of greenhouse gases are released into the atmosphere ("The Hidden Costs of Hamburgers"). Therefore, by consuming meat, you are indirectly contributing to the destruction of the planet.

Producing meat also wastes enormous amounts of water. One pound of beef requires more than 2,400 gallons of water to produce ("Meat and the Environment"). In contrast, producing alternative sources of protein is far less wasteful: one pound of tofu requires only 244 gallons of water (ibid). By switching to a vegetarian or vegan diet, you can save up to 219,000 gallons of water in a single year (ibid).
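For perspective, here is a quick back-of-the-envelope check using only the figures cited in this piece (156 burgers per year, a quarter pound of beef per burger, 2,400 gallons per pound of beef, 244 gallons per pound of tofu). It covers burgers alone, which suggests why the cited 219,000-gallon annual savings is plausible once all animal products are replaced.

```python
# Rough water-footprint arithmetic built from the article's own numbers.
burgers_per_year = 156
pounds_per_burger = 0.25          # a quarter-pound burger, as cited above
gallons_per_lb_beef = 2_400
gallons_per_lb_tofu = 244

beef_water = burgers_per_year * pounds_per_burger * gallons_per_lb_beef
tofu_water = burgers_per_year * pounds_per_burger * gallons_per_lb_tofu

print(beef_water)               # 93,600 gallons per year for burgers alone
print(beef_water - tofu_water)  # ~84,084 gallons saved by swapping those burgers for tofu
```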

It is difficult to know how to respond to these upsetting realities, but none of us needs to contribute to this destruction, and there are many steps you can take. First, you can raise awareness! Tell people about the horrors of meat production, be conscious of where your meat comes from, and be aware of how the animals you eat were raised. The simplest solution, however, is to stop eating meat. By not consuming animal products, you are helping to reduce the number of consumers for large meat factories. You are helping to reduce the number of unethical slaughters of innocent animals. You are aligning your values and your actions.


CTE in the National Football League: Hiding the Evidence of Brain Damage


In a league where any story, no matter how big, can dominate the headlines for weeks, the National Football League has done its fair share of avoiding what should be the biggest headline surrounding the sport: concussions. The NFL must do a better job of informing its players and must implement some changes to the game itself.

The NFL has been publicly condemned for its ignorance of concussions and permanent brain damage, and it has attempted to bring quick changes to the league since Chronic Traumatic Encephalopathy (CTE) was first diagnosed in a former player in 2002. That discovery was made by Dr. Bennet Omalu when he examined the brain of ex-Pittsburgh Steeler Mike Webster, who played as a lineman for 17 seasons. His discovery caught the attention of the public, but it was no surprise to NFL executives, who already knew about the problem. For years they disputed mounting evidence on the dangers of football and attempted to keep it under wraps even as Omalu's findings were published. Why, then, do collegiate and professional players intentionally expose themselves to the risks that come along with the sport?

The steps the NFL has taken have been inconsistent at best, and its efforts deal with only part of the problem. Since 2009, when the concussion protocol was first introduced, the league has taken steps to limit hard-hitting, noticeable head-to-head contact through penalties, fines, and suspensions. However, according to a study from July of last year, there is evidence that brain damage and CTE can be diagnosed later in life even if there is no record of a concussion. The same investigation led to another discovery that dominated the headlines for weeks before the 2017 NFL season began: the study was conducted on the donated brains of 202 football players, 111 of them former professional players. Of those 111, 110 were diagnosed with CTE (99%).

As the 2018 season begins, professional football remains the most popular sport in America. But viewership has declined 10%, and the league is beset by these tough questions. New rules have been created around quarterback safety. The league is taking some steps in the right direction toward a less damaging game, but more must be done to prevent CTE.

Although some of these rule changes may be upsetting to fans and former players, it is rules like these that will help lower the risks of professional football. Players today are more informed than ever, and some are even considering early retirement. But it is still up to the players to make this determination for themselves. The NFL's approach to this issue in the past has been heavily flawed, but if the league continues to be more open about findings like CTE and continues to use new technology and protocols, it could preserve the general style of the game.

More people these days have taken a step away from the sport because they are taking a closer moral look at what they are watching, but ultimately, football is not a safe sport. If the NFL did a better job of informing players of the risks instead of trying to cover up the results, then players would be able to make their own choices. People still regularly consume products that are detrimental to their well-being, such as alcohol and cigarettes; adults know the risk of what they are doing to their bodies and do it anyway. The same principle would ideally apply in the NFL: adults making decisions about their own lives and what they want to do with them. If the NFL truly wants to preserve the game in the state it is played now, it must be open and informative, and it must continue to create new ways to ensure player safety.


Why a Machine Will Steal Your Job


Since the dawn of time, humans have created tools to make their jobs easier. Technology has led to the quickest and grandest societal changes throughout human history. According to History.com, the Agricultural Revolution, catalyzed by innovations like the plow and seed drill, transformed our world from one with 75% of the population working on farms in 1700 to only 2% today. These tools made farming increasingly efficient, so fewer people were needed to produce the abundance of food we now have. Economies boomed as former farmers specialized and took better, higher-skilled jobs.

Technological revolutions like this have happened everywhere, not just in farming. The Industrial Revolution led to an unprecedented level of production and living standards because of machines like the steam engine and cotton gin, which created the same goods humans already could, but faster and cheaper.

When humans and machines work together, the results are phenomenal for everyone. Living standards rise and workers specialize, moving into jobs that require more complex thought as machines take over the redundant, labor-intensive ones. However, we are currently undergoing a revolution unlike any seen before. For the first time, we are creating machines capable of thinking and learning; machines capable of taking over the same types of jobs they forced us into.

Specialized artificial intelligence has been better than humans at narrow tasks for years. IBM's Watson, a supercomputer with AI capabilities, answers questions stated in plain English. Seven years ago, Watson dominated the two best Jeopardy players in the world, winning by over $20,000. Within a year, Watson learned faster than the human Jeopardy champions ever could and went on to triple its score! But didn't Watson have an unfair advantage? He was hooked up to the internet, right? Actually, that's not the case: Watson never had access to the internet. It prepared for the show by reading and learning from thousands of dictionaries, encyclopedias, and history books.

You could argue that Jeopardy, and therefore IBM's Watson, relies on memorization rather than the applied learning that makes human intellect so unique. However, becoming the best Jeopardy player ever just scratches the surface of what Watson can do. Take one of the most prestigious and intellectually challenging jobs in the world: being a doctor. Here in the US, becoming a doctor is an expensive and demanding 10+ year path. Given all of this, you might think that physicians are irreplaceable and that machines couldn't do their jobs better than they can. Believe it or not, Watson can beat out humans here too, and it's not even close.

According to IBM, when diagnosing lung cancer, oncologists have a success rate of around 50%, the same probability as a coin flip. After just a few years of training, Watson correctly diagnoses lung cancer 90% of the time. According to the Economist, doctors would need to spend at least 160 hours per week on medical reading to keep up with the cutting edge of research. Obviously, this isn't possible for people like us; nobody has that much time. For Watson, however, this kind of reading is all it does. Watson never sleeps, never takes sick days, and never asks to be paid. When all the advantages of using a machine are taken into account, it's easy to see why, according to Forbes, AI spending will reach $50 billion by 2020.

What about creativity? Isn't that something unique to humans? It's tempting to think that the creation of paintings, music, films, and other arts certainly can't be simulated by a computer. However, machines have tried and succeeded. According to Recode.com, the first AI-produced artwork will be auctioned off later this year, and it's projected to sell for over $10,000. In the world of music, a Princeton undergraduate coded an algorithm, DeepJazz, in less than two days. According to its website, the algorithm can produce endless jazz music for free that is indistinguishable from human-composed jazz in a blind test. Any skill that can be learned is a skill that a machine can do.

If machines do exactly what we do but better, faster, and cheaper, why wouldn’t companies look to replace human workers? You might be too young to work now, but you’ll have to face this problem someday. Machines like Watson can already best humans in their most complicated jobs, so is it just a matter of time before machines take every job, even yours?


The Face of the NBA: Jordan or James?


When asked, "Who is the face of the NBA?" most people would answer Michael Jordan. Although this is the common belief, it is not true. We can go back through history and look at all the greats, such as Magic Johnson, Wilt Chamberlain, Bill Russell, and Kobe Bryant. But one man stands out: LeBron James. LeBron James' skill set, legacy, and leadership on and off the court show that he, not Michael Jordan, should be the face of the NBA.

The first thing to look at when comparing the two players is statistics. In James' first 14 seasons in the NBA, he shot 50% from the field; Jordan, in his 15-season career, shot only 49%. While this may seem like a minor difference, Jordan is known as one of the best shooters of all time and James is not. When comparing rebounding and assists, James edges Jordan out by 1 rebound and 2 assists per game. Although James beats Jordan in the three biggest categories, the box plus-minus stat shows the largest gap between the two. This stat measures how much a player contributes per 100 possessions above (or below) a league-average player. James' box plus-minus over his first 14 seasons was 9.1; Jordan's was 8.1. The main argument people make for Jordan's G.O.A.T. (greatest of all time) status over LeBron is that Jordan went undefeated in the Finals and won 6 rings. While this is true, Jordan never played against more than 2 all-stars in a Finals series and faced only 1.666 all-stars on average per Finals. James has played against 4 all-stars at once (2017 vs. the Golden State Warriors) and has faced 2.125 all-stars on average per Finals.

One cannot truly analyze the greatness of LeBron James without taking his 'X-Factor' into account: one of the special traits James possesses is that he makes the team around him better. It does not matter if he is playing with bottom-tier or below-average players; he can make them look like elite players. In 2009, the Cleveland Cavaliers finished with a record of 61 wins and 21 losses (a 74.4% win rate). After James left Cleveland to join Miami, the Cavaliers finished with a record of 19 wins and 63 losses (a 23.2% win rate). Aside from the devastating loss of LeBron James, the Cavaliers still had a nearly identical roster to the previous year. If we compare this situation to Michael Jordan's first retirement, we see that Jordan did not have nearly the same effect. In the 1992-93 season, the Chicago Bulls finished with a record of 57-25 (a 69.5% win rate). After Jordan retired, the 1993-94 Bulls finished 55-27 (a 67.1% win rate). The LeBron-less Cavaliers finished with 19 wins; the Jordan-less Bulls finished with 55. If James' 'X-Factor' is not shown through these results, I do not know how else to express it.

The off-court legacies of LeBron James and Michael Jordan are very different. Although Michael Jordan made a huge impact on the popularity of the NBA and changed how basketball shoes are designed today, there are elements of his life outside basketball that are often overlooked. Jordan had a massive gambling addiction. Businessman Richard Esquinas revealed in his book Michael and Me: Our Gambling Addiction…My Cry for Help that he "had won over $900,000 from Jordan in golf betting." Many people also believe that the real reason Jordan retired for the first time was his gambling addiction. On the complete opposite side of the spectrum, LeBron James has never been in a major controversy that has impacted his career. Off the court, LeBron James inspires. In 2018, James opened a public school for at-risk youth in his hometown of Akron, Ohio. If it were up to me, the face of the NBA would be LeBron James, not Michael Jordan.


The Kids Are Alright

Families vary as much as individual people do, but every parent will agree on one statement: little ones grow up too fast. One second, mommy is fitting her five-year-old son with his first backpack before he heads to kindergarten, and the next, mother is tidying his old room after he's moved to a far-flung college. This phenomenon is not new ‒ an accelerated sense of time seems to be an inexorable component of parenthood ‒ but parents of Gen Z (born 1995-2014) may be more right than their parents were.

We are currently living in “The Age of Information,” where access to seemingly infinite knowledge is granted with ease by the internet. As increasing numbers of children receive devices, they become exposed to a wealth of content that includes subject matter previously deemed inappropriate for children, such as sex and drugs. And while their familiarity with such ideas is premature, they are also granted insight into topics that would not have crossed the minds of children a generation earlier, such as politics and wealth inequality. This education at such a young age speeds up mental development and maturity, resulting in a new generation intellectually prepared to tackle the myriad issues on its plate. Vulnerability to explicit content can negatively impact psychosexual development, but the practical implications of an earlier adulthood for a more educated generation make the loss of innocence a fair price.

People online are at their most uncensored, an effect so widespread that psychologists have dubbed it "the disinhibition effect." Since the vast majority of online influencers are in their twenties, an age full of the vices of post-adolescence, children are exposed to the raw realities of sexually charged, drug-addled young adulthood arguably far before they are equipped to handle them. For example, there is an increasing prevalence of young girls hypersexualizing themselves online. While they own their bodies and their behavior is their prerogative, the attention and money they receive for their antics glorify a troubling online trend of normalized underage sexual activity and drug use. Kids who are not necessarily ready for the same activities may see the praise their online peers receive and feel motivated to jump headfirst into those activities in a plea for the same attention. Additionally, the fact that social media fame is so easy to compare through follower counts can lead to low self-esteem and depression when one doesn't measure up.

But these growing pressures, however unhealthy for psychosexual development, are only one component of an accelerated adulthood that is, on the whole, good. The internet and social media expose children to content of all genres, of which inappropriate material is only a small part. A great deal of internet content is informative and even educational. Additionally, the overload of information, along with the inherent questionability of internet content, has taught younger generations to think critically, questioning headlines rather than taking them at face value. This is crucial in a voter base, as demonstrated by the impact of Facebook's paid headlines on older generations' votes during the last election. Additionally, the ability to spread information, and therefore raise awareness of humanitarian issues, has made Gen Z more socially compassionate and politically active. According to The Atlantic, volunteer work is becoming a norm among teenagers as it never has been before, and shockingly young civil rights leaders such as Malala Yousafzai are cropping up more and more frequently. This development is not lost on older generations; The New York Times even made the case that the voting age should be lowered to sixteen after the "thoughtful and influential activism of young people" following the Parkland shooting. Without the internet, younger generations would not have the same intellectual maturity or the same understanding of sociopolitical issues that they do now.

Anything powerful has upsides and downsides, and the internet's impact on children is no exception. It speeds up children's exposure to traditionally adult topics that they may not be physiologically ready for, but it also speeds up their intellectual and critical development. So even if parents have to watch their children grow up faster still, the lasting good that early development does for society as a whole, and the power it gives the individual adolescent, outweigh the associated tribulations of adulthood.


Social Media is Eroding Our Ability to Detect Fact


As social media has evolved over time, it has given a voice to many who seldom had one before. Twitter might be the best representation of this phenomenon, as a simple search can reveal a tweet from almost anyone. However, social media has also worsened some of society's most problematic traits. Platforms such as Twitter, Instagram, and Facebook have become centered on seeking popularity over sharing facts. We post anything from photoshopped photos to fabricated numbers and statements with little hesitation, and unless you know the perpetrator well, you are left to believe only what your eyes see. The pursuit of a like, a thumbs up, or a retweet can drive a user to abandon their allegiance to reality and sever ties to the truth.

However, unknown to some, this is not limited to the sphere of celebrity. While social media has opened up a whole new way for people to access, identify, and report the facts, many news and opinion outlets have become corrupted by the pursuit of popularity. Some label outlandish and unproven opinions as fact and claim they are exclusively reporting the truth; others sit back, identify a target audience, and report anything that appeals to it. Admittedly, the success of these tactics says just as much about how easily we are manipulated by the pursuit of popularity, since being aligned with popular opinion makes us believe we are validated. At the same time, these tactics have created an atmosphere where no source can be properly trusted: while one claims fact, another accuses it of pushing an agenda.

For example, the New York Times has been the gold standard of journalism in this country for decades. Recently, the Online News Association, a coalition of online journalists and reporters, listed the New York Times in the gold-standard grouping of journalism and reporting. While the group did concede that the paper is slightly left-leaning, its reporting and content are factual and reputable. A survey done by Statista found that 50% of participants rated the Times "very accurate" or better, while only 24% said it was not reliable. By contrast, analysis done by PunditFact shows that at least 22% of the statements it checked from CNN, MSNBC, and Fox were objectively false. Statista polling shows that 33% of respondents found MSNBC untrustworthy, with CNN and Fox scoring even higher. Furthermore, of the three networks, only MSNBC was ranked in the gold standard of reporting by the ONA.

Contrary to some beliefs, the New York Times is not failing. The company reported a record increase of nearly 43% in online subscribers over a single annual cycle. Yet some are adamant about labeling the Times as "failing," and they voice that claim through one medium more than any other: Twitter.

In my own Twitter research, I searched for accounts that had supported the "failing New York Times" propaganda, and I examined each one, looking for trends among users. About 70% of the accounts I analyzed had not only criticized the Times as untrustworthy but had also posted some form of admission that they don't, or haven't, actively read the source. This trend is concerning for one of two reasons: either the user is falsely claiming they don't read any of the outlet's work, or they are declaring the Times false without having actively read any of its content. Either way, it can be traced back to a scenario we see across social media: the constant chase to be in the "popular" grouping.

Through media like Twitter and Instagram, users try to align their posts with what they believe to be the popular trend. From nearly or exactly copying the caption of a celebrity's post to imitating how they pose or dress in photos, we take our cues from those at the top of the popularity hierarchy, often hoping our replications can pull us one step closer to them. This falsehood extends beyond the realm of social celebrity, as we take our cues for political expression from the popular figures who came before us. We echo the phrasings and statements of those we support politically like a parakeet: often not understanding the full meaning, but believing that doing so better aligns us with the popular thought and action of our desired group.

So what can we do? We are already in too deep to reverse the trend into which we have fallen. Besides, doing so would tear apart the fabric of modern-day society and send news outlets into chaos trying to find a new target audience, creating still more problems. However, instead of trying to solve it in one leap, there are small steps we can take to rectify our misconceptions and malpractices. I am smart enough to know that telling you how to behave online won't change anything, but what can change is how you take in news and reporting. Abandon the practice of reading to confirm or disprove opinions; instead, read to learn the facts. Fixate on the numbers, the quotes, and the verifiable evidence rather than the interpretations and the commentary. Let's go back to acknowledging the facts before spouting our opinions.


Solitary Confinement: Torturous and Detrimental to Mental Health


I remember the first time I drove past San Quentin State Prison. At seven years old, I observed the barred windows and the tall barbed wire fences with a sense of confusion. After asking my dad what that building was, he simply replied, "That's where all the bad guys go." Over the years, I built my understanding of prisons through media and the talk around me. From my limited information, I assumed that all prisoners deserved their fate.

Even after a brief period of research on the topic, I realized the depth of my own misconceptions regarding solitary confinement. For instance, according to NPR, solitary was introduced in the 1800s as an experiment in rehabilitating inmates, but it was soon abandoned because of the clear psychological distress it caused. In the 1980s, prisons reestablished the practice in order to reduce violence in the general population. To this day, however, they continue to disregard both the immorality of the concept and its results.

In fact, some would liken solitary confinement to torture. The AFSC reports that when given this punishment, prisoners are snatched from the general population and shoved into a 5-by-7-foot cell. Some get lucky and have a window smaller than a sheet of printer paper looking out on the dank hallway. Others rely solely on a slim slot in the door used to deliver their mush of food. Prisoners look forward to their one hour of physical activity each day, which consists of walking around an outdoor cage like animals. The rest of the day they are forced to sit on a thin bed, and some become trapped in their thoughts. Anger, guilt, and loneliness can build up and lead to serious mental illnesses such as depression, anxiety, and psychosis. Moreover, severe hallucinations and panic attacks are not uncommon. According to Dan Nolan of Frontline, it is extremely unlikely that an inmate will spend fewer than fifteen days in solitary, yet studies show that signs of mental distress begin within ten days of confinement. With such an extreme lack of human interaction, many prisoners lose their minds.

Along with the toll on prisoners' mental health, I see a conflict with basic decency and morality in treating fellow human beings in such barbaric ways. Thus, I wonder why we continue to segregate prisoners in such a tormenting fashion. Obviously, the prison system is complicated, but I do not believe that this form of solitary confinement is the most just or effective mode of rehabilitation. If the goal of solitary is to reduce prison violence, then separating an inmate from others for a brief amount of time seems reasonable. However, the conditions of that confinement must be altered.

In order to make the punishment more effective without further damaging the prisoner's mental state, confinement rooms should not be as closed off as they currently are. If cells have larger windows that do not fully wall inmates off, prisoners will have more human interaction and comfort around other people. Furthermore, increasing their exercise time outside the cell will significantly improve their mental state; exercise is known to improve mental health, since challenging your body releases endorphins, as Psychology Today reports. If inmates are locked in a cell all day, their mindset becomes stagnant and passive. We, as a society, need a shift in the judicial system to help people who are locked away for years on end. We cannot continue to allow institutions to consistently resort to this horrendous punishment rather than finding a more just solution.

Solitary confinement is a form of torture that has proven to be ineffective. Not only is it responsible for half of prison suicides, it also works against rehabilitating the inmate, lowering the odds of a successful return to society (NCBI). Even the term "prisoners" dehumanizes these people to the level of animals. Through media and education, the stigma around inmates must be reduced enough for the public to see that this practice is inhumane and cruel. From there, why wouldn't we act on our morals and take the initiative to help this population serve their time in a more humane manner?

 


Families Split in Foster Care

Every year, more than 250,000 children enter the foster care system. When a parent cannot take care of a child, the child is placed in foster care, where they are expected to live in better conditions, but this is not always the case. Once in the system, a child lives with foster parents or in a group home. Often, children move in a constant rotation from house to house. This rotation is strenuous on the child and leaves them struggling to trust adult figures when all they want is a secure household.

I feel the government should give more financial resources to parents who want to raise their children but cannot afford to. For example, a woman named Anna had four children removed and placed in foster care because she did not have the money to take care of them. Anna had no house and relied on "couchsurfing and overcrowded apartments" (Schelbe, Lisa). She explains that she wants to raise her own children: "I know I cannot be what the foster mother is giving them…Do I want my kids to be out there on the streets with me? No, I don't. Do I want them to be fed right, taken care of right? Yes, I do. So that's why I have them with the foster mom" (Schelbe, Lisa). What if Anna had the same government resources as foster parents, the resources to care for her children? In fact, "in some states, payments to foster parents caring for four kids equal the after-tax income of a $35,000-a-year job. The money is tax-free" (Gail Vida Hamburg). Anna could use this money to provide basic necessities for her children and a stable home for her family.

I feel that there are too many children in the unstable and unreliable foster care system. Currently, 23,000 children remain in the foster care system until they turn 18 (NYFI). When a child is placed in foster care, there is so much that can go wrong. To begin, uncommitted foster parents quit after short terms, "as 60% quit within the first year of becoming a foster parent. 22 percent said they quit because of economics; others cited the lack of support such as backup and respite care as reasons for leaving the program" (Gail Vida Hamburg). When foster parents quit and close their doors, foster children suffer. Not everyone is cut out to be a foster parent, but it is important that those who are ready stay foster parents for as long as they can. Secondly, these experiences force many children to distrust other adults. Michael B. Pines, a psychologist, says, "every new placement is a loss. The result is that these kids begin [to] not trust anyone" (Craig, Conna).

I feel that a positive birth family is a great advantage to children who have struggled in the foster care system. Children who prefer living with their own family, and who are in a relatively safe environment, should be allowed to do so. For many parents, having a child awakens instincts to act for the child's sake; this can motivate them to clean up their lives by finding a job, a home, and help. It is heartbreaking when a child is taken away from their family. Many of these children want and need their family: "90% of the children who enter foster care ultimately return home. And as many as 60% who are being prepared to live independently when they grow out of the system instead return to live with their families" (Gail Vida Hamburg). If a child goes home after foster care anyway, the system has simply added more difficulty to that child's life. The foster care system does not always know which children would like to stay with their families and which would like to leave.

It is time to lower the number of children who are unnecessarily in the foster care system and to create room for children who truly need a new, secure home. In the future, the process should include an interview between the child and a member of Child Protective Services (CPS), after CPS has seen the conditions the child is currently living in, to find out whether the child wants to enter the system or stay with the family. Then, the member of CPS should advise the family to apply for, and help them secure, funding from the government.


Feminist Attacks on Male Gamers

The hobby of gaming has long been dismissed as a waste of time. It has also been used as a scapegoat by media outlets to explain school shootings. Feminists have been the most recent group to put a negative label on gamers. Currently, gamers are labeled by the public as violent, lazy, and anti-feminist. Like all the other labels, the one put on gamers by feminists is based on false or biased claims.

The gaming community is made up of 48% females and 52% males, though only 6% of females and 15% of males identify as gamers. The rest, who do not identify as gamers, can be classified as casual gamers: people who play Facebook and phone games. Once thought of as a male-only community, gaming has evolved to be more inclusive of every gender.

Recently, there has been a problem in the community. With the rise of Twitch, everyone is trying to become the biggest streamer, and non-gamer girls have come into the community to exploit pre-teens for donations by using their sexuality. Twitch's rules state that you have to be playing video games at all times, but on these streams the video game is secondary to the streamer; in most cases, the share of the screen given to the webcam compared to the game is ridiculous. Legendary Lea, Zoie Burgher, Celestia Vega, Little Fey, and others use the platform to bring in money from subscriptions and donations from 14-to-16-year-old kids. In particular, Zoie Burgher brought in donations by promising to twerk for a certain amount of money. Additionally, streamers such as Little Fey promise viewers who subscribe for a certain amount of time access to a private Snapchat where they post sexual content. Older gamers have expressed disgust at this exploitation and have tried to push these types of streamers out of their community for the sake of genuine female streamers. To feminists such as Anita Sarkeesian, gamers are pushing these women out only because they are women; this view completely misses the fact that these "gamer" girls are in the community only to take money from impressionable pre-teens.

Another of the biggest problems in the gaming community is intrusion by non-gamers who are looking to stir up trouble. Anita Sarkeesian, a self-proclaimed gamer, is one of these intruders. Sarkeesian was the cause of the #Gamergate situation, in which some gamers took it upon themselves to verbally attack people like her. During this horrible situation, a few things about Sarkeesian were brought to light. During a class visit at Claremont McKenna College in 2010, she claimed not to be a gamer and not to know anything about video games, but five years later, at another seminar, she claimed to have been a gamer all her life. She then started a YouTube series called Tropes vs. Women in Video Games, made to expose aspects of games that are problematic for feminists. The series could have been great, because I agree that in some cases oversexualization in video games is problematic and that more games need female protagonists. But the quality of the videos worsened as time went on, and Sarkeesian began giving her viewers false information about games. For example, in Hitman there is a scene where the protagonist goes through a strip club to assassinate a male target. The goal is to kill only the target and go undetected; killing civilians prevents you from advancing in the game. Sarkeesian completely ignored that goal and began killing the strippers in-game. She claimed that the game promoted the murder of women because of this scene, and that those who enjoyed the game were misogynists. This and many other examples enraged gamers and led to cyber attacks on Sarkeesian. I do not agree with the behavior of those who participated in the cyber attacks, but I believe they did so because people feel invincible behind a screen. This situation could have been prevented if Sarkeesian had not made claims based on false or manipulated information.

Recently, Sarkeesian was once again in the news for attacking Tyler "Ninja" Blevins' achievement of being the first gamer ever to appear on the cover of ESPN's magazine. Ninja had remarked that he wouldn't stream with female streamers out of respect for his wife; he saw that streaming with the opposite gender could create false rumors and damage his marriage. For example, another streamer, Guy "Dr.Disrespect" Beahm, began streaming with a female streamer, and within a couple of weeks the internet began questioning his faithfulness to his wife. The rumors caused him to quit streaming for months to repair his strained marriage. At no point did Ninja say that he wouldn't stream with women because they aren't as good as men at gaming. He chose his words carefully so that he couldn't be misquoted and painted as a misogynist. Sarkeesian still tweeted, "wait wait wait isn't this the guy who said he won't play with women? cool cool cool way to go @espn elevating the status of a misogynist". Looking at her choice of words, we see she didn't learn from the #Gamergate situation. By removing the phrase "out of respect for his wife," she made Ninja look like a misogynist. Words matter, and the way we use language and information matters as well. There are good approaches to everything, but Sarkeesian and many others have approached this situation poorly and, as a result, they receive backlash from the community. Gamers are not anti-feminist in the slightest, but they disapprove of the faces leading feminism into the gaming community. For a movement to be successful, its face has to be honest, open, and reasonable, and gaming has yet to witness such a leader.

