TheTechMargin Weekly Wrap - Creative - Safety - AI News in Bites


Weekly Digest Contents

  1. Brain Food
  2. AI Safety Bites
  3. AI Creative Applications
  4. Friends of TheTechMargin
  5. Tip of the Week

Brain Food

A nation woven from differences or divided by hatred?

What is important can change very quickly.

Anyone who has been laid off, suffered an injury, lost a loved one, defaulted on debt, experienced trauma, and on and on knows the whiplash shift of focus that accompanies unforeseen circumstances.

We, the people of this planet, wish to live our lives the best we can.

There are those who would seek to take that right away from us, however.

The dark hearts of men with self-serving intent have plagued this world since the beginning, and this time is no different from any other in that these dark-hearted, selfish beasts still linger.

This time is different from any other in the ease with which the poisonous pill of confusion is swallowed.

If you or your child or anyone you know was bullied, you know the corruption that herd mentality causes even in the hearts of typically good people. You have watched the face of a friend turn cold as they choose the side of your abuser and kick you to prove their allegiance is with the ringleader and not you, their childhood friend.

Darkness comes in all forms and will always emerge when we are complacent, unclear, and divided. When we assume the good side is static and the darkness is defeated, we hand over responsibility for our part in the equation.

AI Safety Bites

Pentagon Technologists

The Pentagon is mobilizing its Chief Digital and Artificial Intelligence Office (CDAO) to implement the White House's new National Security Memorandum on AI, ensuring responsible AI development across federal agencies. This includes redefining risk practices and updating policies for AI in weapon systems. The focus is on creating tools and principles that help developers demonstrate their AI's compliance with ethical and safety standards, part of broader efforts to align with international AI safety norms. Read the full article here.

Global AI safety institutes

On November 21-22, 2024, AI experts from nine countries and the EU will meet in San Francisco to launch the next phase of international cooperation on AI safety through a network of AI Safety Institutes (AISIs). This initiative, first announced at the May 2024 AI Seoul Summit, aims to accelerate AI safety advancements globally.

As defined by the Bletchley Declaration, AI safety focuses on evaluating, preventing, and mitigating risks from advanced AI systems. This includes technical safety (improving AI models' internal machinery) and process-based safety (enhancing policies and procedures around AI development).

AISIs are publicly funded research institutions that mitigate risks from frontier AI development. They perform foundational research, develop guidance, and work closely with companies to test models before deployment. This multifaceted approach ensures a virtuous cycle of research, testing, and guidance.

Noteworthy Key Players

  • United States: Led by Secretary of Commerce Gina Raimondo, the U.S. is a critical player in launching the AISI network.
  • United Kingdom: With a substantial budget of £50 million per year, the UK AISI is a significant contributor.
  • OpenAI: Collaborating with the U.S. AISI on safety testing and evaluations.

Upcoming Milestones

  • November 2024: Inaugural network meeting in San Francisco to set the agenda and priorities.
  • February 2025: AI Action Summit in Paris to showcase the network's work.

Read the full paper on the latest updates here.

AI Creative Applications

Google Codes with AI

One of the most notable creative applications of AI at Google is its integration into the coding process. In a groundbreaking shift, more than a quarter of all new code at Google is now generated by AI, then reviewed and accepted by engineers. This innovative use of AI significantly boosts productivity and efficiency, allowing engineers to focus on more complex tasks and accelerating the development cycle.

This trend highlights Google's commitment to leveraging advanced technologies and sets a precedent for the tech industry. As AI continues to evolve, more companies are likely to adopt similar practices, leading to a fundamental transformation in software development. The integration of AI in coding underscores the potential for increased efficiency, faster time-to-market for new products, and a greater emphasis on innovation and creativity. If this is not a wake-up call for AI-skeptical coders, it should be. Please get familiar with this landscape, my friends. Times are changing, and your job is changing, too.

Read the CEO remarks from Google's Q3 earnings report in full here.

Friends of TheTechMargin

Membership for Women

My friend and collaborator Esther has created a group to nourish your body and mind and to connect with other women.

Holistic Health Newsletter

My friend and collaborator Esther also has a wonderful newsletter you can check out below!

Tip of the Week

Join the Waitlist to Test My App
TheFaceOfAI is an interface tool for AI.
I am just getting started, but here are some of the ways you can work with AI through the app:
  • Design an empowering avatar as a reminder of how unstoppable you are.
  • Generate ideas for your personal or professional life with high-level specificity in a single prompt.
  • Generate perfect prompts you can reuse within TheFaceOfAI or elsewhere in other AI tools.
  • Learn about AI through the interactive learning tool, then print and share your lessons with your team.
  • Generate task-specific templates and getting-started documents for anything you can think of, then visualize the idea!
  • Change your thinking or flip it upside down. With TheFaceOfAI, you can work from images to text in your ideation process and flip back and forth, whatever works for you.
Complete this short survey to be added to TheFaceOfAI waitlist.
" TheTechMargin LLC will never sell or share your data.
