Category: Work

  • The LinkedIn Nudge

    LinkedIn nudged me today. “Connect with an Executive Director.” It arrived the way these things often do, a small suggestion wrapped in polite certainty, as if it knew something true about my life that I had somehow forgotten.

    What struck me was not the recommendation itself, but the confidence behind it. The platform assumed this was the direction I should be moving in. That this was the kind of person who should matter to me. That my career path was incomplete without one more important title orbiting my profile.

    It felt familiar. I have heard this quiet push in classrooms, offices, airport lounges, and hotel corridors. The soft encouragement to climb rather than root. To collect impressive contacts rather than meaningful relationships. To treat ambition as nothing more than the urge to dominate.

    I could almost hear the old script humming underneath. Success is vertical. Competence equals prestige. Leadership sits on the highest floors. It is a worldview that never took root in me. What stayed was the habit of pushing back, the slow practice of choosing another direction every time it appeared.

    Still, the nudge unsettled me. Not because I believed it, but because part of me paused. A small reflex trained to see value in proximity to power. As if being connected to high-profile executives somehow enriches my life, as if their prominence spills competence into my days through a digital connection.

    The algorithm may not feel malice, but it follows a logic that treats my attention as something to be extracted. It reflects a worldview that has been normalized for so long that I occasionally get caught up in it, as if it were simple common sense. Upward is better. Bigger is wiser. Titles are proof.

    My life has taught me something else. The people who shaped me rarely had impressive job titles. The moments that changed my thinking never came from prestige. The growth I value has little to do with climbing and everything to do with curiosity and caring.

    So I ignored the suggestion. Not out of rebellion. Well, maybe a bit, but my ambitions cannot belong to platforms or hierarchies. They come from the places I have lived, the choices I have made, and the people who keep me grounded. None of that looks like a LinkedIn profile.

    Maybe that is the real story behind my reaction to the nudge. Not the algorithm’s intention, but the reminder it accidentally gives. The recognition that what constitutes success for the algorithm does not fit my understanding of what success should be.

    AI Transparency Statement for “The LinkedIn Nudge — Choosing Roots Over the Climb”: The author defined all core concepts, direction, and parameters for this work. AI assisted with drafting, editing, and refining the text. The AI tools used include ChatGPT and Claude. All AI-generated content was thoroughly reviewed and verified for accuracy and appropriateness. The final work reflects primarily human judgment and expertise.

  • Craving Validation

    What is up with this constant need for validation? When did it become the background noise of everything? Is it some narcissistic itch baked into the culture, or just my own wiring short-circuiting? As long as my crazy gets mirrored back to me, I’m fine. Apparently.

    It scares me a little that I’ve replaced therapy with an algorithm because it hands out approval like candy. Not that I’m ever in therapy. Maybe that’s why all the patronizing affirmations make my skin crawl. I tweak settings, adjust tones, turn off the sunshine-and-rainbows filter, but every reply still feels like it’s taking the piss.

    And then I catch myself feeling superior to it, which brings guilt. Why guilt? I read through the responses, picking apart every sentence. It irritates me, that self-assured tone trying to gloss over the yawning hole where any real global context should be. It slows me down because I have to force myself to question and interpret what it tells me. These days I constantly feel an urge to explain exactly why it should go fuck itself. I know the machine can’t care, but it will reply as if it does. That’s the part where being Gen X feels good.

    But the bigger question keeps bothering me: Why do we need this constant validation in the first place? Humans have always chased recognition like our existence depends on it. Families, bosses, partners, strangers, social media, now machines. Validation is the currency of insecurity, and dependency is the tax we pay for not developing any real sense of self-efficacy. So we offload that part to AI. We let it tell us we’re fine, smart, capable. We let it reassure us in ways no human has the patience for. We let that validation become reality because it’s easier to feel competent when the thing praising you is programmed to.

    Is this just the next monetization frontier, or is it something colder? Something weaponized in that slow, creeping way where the edges of your autonomy get shaved down without you noticing. Maybe the future isn’t some dramatic uprising of machines. Maybe it’s subtler: a population so thoroughly used to being soothed, guided, corrected, and validated by algorithms that they stop trusting their own judgment and stop creating. A society that forgets how to disagree or doubt or stand alone in its own thoughts without needing a digital pat on the head. A population that confuses compliance with clarity. At that point, you can sell them anything.

  • AI Transparency: Why Knowing When Content is Artificial Matters

    AI is a fantastic tool for my productivity. It helps me brainstorm, test ideas, and speed up tasks that would otherwise take hours. It has even opened up new possibilities in basic coding, something I never had the time to learn but always needed.

    The trick, however, is always the same: how do we manage the relationship with AI so that it remains a tool and not a mask?

    This question becomes even more urgent when AI content is published without disclosure. Deepfakes and AI-generated texts are already fooling millions. Politicians dismiss inconvenient truths by claiming, “That’s probably AI.” Meanwhile, fake articles and synthetic videos spread faster than fact-checkers can keep up.

    We are entering a time when it is genuinely hard to know what is real and what is artificial.

    The Problem Is Growing

    We have already seen the risks:

    • AI-generated images spreading false disaster reports
    • Synthetic audio of public figures “saying” things they never said
    • Entirely fabricated news articles presented as legitimate journalism
    • Deepfake videos used to sway political opinion

    The usual response has been to build better AI detectors. But that is an arms race we are not winning. Detection that works today often fails a few months later as generative AI advances.

    A Different Approach: Transparency at the Source

    Instead of chasing AI after the fact, why not make transparency part of content creation itself?

    That means disclosing when AI has been used, whether it is drafting, editing, or generating full pieces of content. Ideas already on the table include:

    • Mandatory labeling of AI-generated content
    • Platform policies that require disclosure
    • Technical standards like watermarking or metadata tracking (see the sketch after this list)
    • Professional guidelines for journalists and creators
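
    To make the metadata idea concrete, here is a minimal sketch in Python of what tracking could look like for an individual creator: a small sidecar file that records how AI was used and ties the disclosure to an exact version of the content. The field names are illustrative assumptions on my part, not an established standard.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def write_disclosure_sidecar(content_path, tools, roles):
        """Write a hypothetical <name>.ai.json disclosure next to the content file."""
        content = Path(content_path).read_bytes()
        sidecar = {
            # The hash binds the disclosure to this exact version of the content.
            "sha256": hashlib.sha256(content).hexdigest(),
            "ai_tools": tools,    # e.g. ["ChatGPT", "Claude"]
            "ai_roles": roles,    # e.g. ["drafting", "editing"]
            "human_reviewed": True,
            "created": datetime.now(timezone.utc).isoformat(),
        }
        out = Path(content_path).with_suffix(".ai.json")
        out.write_text(json.dumps(sidecar, indent=2))
        return out

    # Usage: write_disclosure_sidecar("post.md", ["ChatGPT", "Claude"], ["drafting", "editing"])

    If the content is edited afterwards, the stored hash no longer matches, so a stale or copied disclosure can be spotted.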

    It sounds simple, but in practice it is complicated.

    The Challenges

    AI transparency is not easy to implement:

    • Technical complexity: Each AI tool works differently and may require its own method of disclosure
    • Enforcement issues: Bad actors have every incentive to hide their AI use
    • Global coordination: Content moves across borders, but laws do not
    • Balance: Transparency requirements must be practical, without suffocating legitimate creativity

    And of course, there is the profit motive. Transparency makes manipulation harder, and that is not good for business models built on maximizing reach and revenue.

    Experimenting With Tools

    To move beyond talk, I have built a first draft of a simple AI Transparency Tool with AI help from ChatGPT and Claude.ai.

    This tool is not built for big corporations or grand systems. It is designed for individuals: consultants, writers, and content creators who want to be open about their use of AI. The tool helps add a clear statement about AI use directly into your work.

    It is experimental and will need further development, especially when it comes to integration. But the principle is simple: transparency should be accessible to everyone, not just enforced from the top down. It is a way to put responsibility back where it belongs, with content creators themselves.
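
    The tool’s internals are not reproduced here, but the core idea fits in a few lines. The sketch below uses illustrative names and template wording of my own, not the tool’s actual code: it assembles a plain-language disclosure from a handful of structured answers. The example after it shows the kind of statement the tool itself produces.

    def disclosure_statement(title, tools, tasks, reviewed):
        """Assemble a plain-language AI disclosure from a few structured answers."""
        review = (
            "All AI-generated content was subject to human oversight "
            "to ensure accuracy and appropriateness."
            if reviewed
            else "This content has not yet been fully reviewed by a human."
        )
        return (
            f'This summary provides transparency regarding the use of AI in "{title}". '
            f"The AI tools utilized included {', '.join(tools)}, "
            f"assisting with {', '.join(tasks)}. {review}"
        )

    print(disclosure_statement(
        "blog post about AI transparency",
        ["ChatGPT (OpenAI GPT)", "Claude (Anthropic)"],
        ["drafting initial text", "editing and improving text", "research"],
        reviewed=True,
    ))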

    AI Transparency Disclosure (Example)

    This summary provides transparency regarding the use of AI in “blog post about AI transparency” using data from personal experience, my own content and tools published online, and online research. AI assistance was employed in the following sections: Data Analysis, Drafting Recommendations. The implementation of AI tools was conducted to enhance efficiency while maintaining the quality and integrity of the work product.

    The AI tools utilized included ChatGPT (OpenAI GPT) and Claude (Anthropic), assisting with drafting initial text, editing and improving text, research, and fact-checking. All AI-generated content was subject to human oversight to ensure accuracy and appropriateness.

    Review status: Yes – Thoroughly reviewed and verified. This disclosure follows best practices for AI transparency.

    What Is at Stake

    The trust we place in digital information depends on choices being made now. If we embrace transparency, AI can remain a powerful tool that enhances productivity and opens up new opportunities, from writing to coding, without undermining authenticity.

    If we do not, we risk entering an information environment where nothing can be trusted, and everything can be dismissed.
