SCASA - Southern California Chapter, American Statistical Association

Civic and Social Organizations

Los Angeles, CA 23 followers

Southern California chapter of the American Statistical Association

About us

The Southern California Chapter of the American Statistical Association (SCASA) is a non-profit organization that exists to promote unity within the community of statisticians, especially in the Southern California area, and to contribute to statistical education within the professional community and the general public. Through these efforts, the Chapter hopes to increase the contribution of statistics to human welfare everywhere. SCASA's mission is to support excellence in statistical practice, research, and meetings, to work for the improvement of statistical education, and to promote the proper application of statistics.

Membership in SCASA is open to all individuals in all fields related to statistics. SCASA members have the benefit of being actively involved in organizing local events and activities and of voting in elections for the SCASA board. Members who also belong to the national American Statistical Association are eligible to hold SCASA office. Regular membership dues are $12 per year, and $6 for full-time students and retirees. SCASA is grateful to the organizations that sponsor events or contribute books and goodies to give out as door prizes; some of the most faithful supporters in previous years have included Amgen, SAS, JMP, Salford Systems, and CRC Press.

Website
https://community.amstat.org/scasa/home
Industry
Civic and Social Organizations
Company size
11-50 employees
Headquarters
Los Angeles, CA
Type
Nonprofit

Updates

  • DSML trend: AI-based scientists win 2024 Nobel Prize in Chemistry

    The 2024 Nobel Prize in Chemistry was awarded to three researchers for their groundbreaking work in protein structure prediction and design using artificial intelligence (AI):

    · Demis Hassabis (British): co-founder and CEO of Google DeepMind
    · John M. Jumper (American): director at Google DeepMind
    · David Baker (American): professor of biochemistry at the University of Washington

    Hassabis and Jumper, working together at Google DeepMind, developed AlphaFold, an AI tool that revolutionized protein structure prediction. In 2020, AlphaFold solved a decades-old problem: predicting the three-dimensional structure of proteins from their amino acid sequences. Since then, AlphaFold has been used to predict the shapes of all known proteins, and the latest version, AlphaFold 3, extends these predictions to DNA, RNA, and small molecules such as ligands, which are vital for drug discovery.

    David Baker, meanwhile, pioneered AI tools for protein design and structure prediction, including the widely used Rosetta family of tools. He also created ProteinMPNN, an open-source AI tool that helps researchers discover unknown proteins and design new ones. Recently, Baker's lab developed custom molecules capable of precisely targeting and eliminating disease-associated proteins in living cells.

    That's my take on it: This award underscores the growing role of AI in scientific research, marking the second Nobel Prize awarded for AI-related work in 2024, following the Nobel Prize in Physics for neural network research. The recognition of AI in both chemistry and physics has ignited discussions about how computational methods are reshaping traditional scientific disciplines. In response to the Physics prize, psychologist Gary Marcus raised concerns, noting that while Geoffrey Hinton has significantly influenced machine learning, it remains unclear how his work advanced physics specifically. Citing Steve Hanson, he argued that the Nobel committee might not fully grasp the history of neural networks. However, Marcus sees the Chemistry prize for Hassabis and Jumper as a clear and well-deserved win. This complicated issue, which involves the tension between the connectionist and symbolic approaches to AI, warrants deeper discussion.

    Links: https://lnkd.in/gXmV43Ah https://lnkd.in/gt3PHBQa

    Two Nobel Prizes for AI, and Two Paths Forward

    garymarcus.substack.com

  • DSML trend: AI pioneers win 2024 Nobel Prize in Physics

    The Royal Swedish Academy of Sciences has awarded the 2024 Nobel Prize in Physics to John J. Hopfield and Geoffrey E. Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks." Often referred to as the "godfather of AI," Hinton laid the groundwork for the machine learning systems that are transforming fields such as medical diagnostics by enabling faster and more accurate decision-making. His pioneering work has driven the rapid development of modern machine learning. The prize, worth 11 million Swedish kronor, will be shared equally between Hopfield and Hinton.

    That's my take on it: Interestingly, Hinton is not a physicist but a cognitive psychologist and computer scientist. His contributions to AI date back to the 1970s. At that time, UCSD researcher David Rumelhart, who was trained in both psychology and mathematics, sought to improve the perceptron by introducing the backpropagation algorithm into multilayer networks. This algorithm, also known as backprop, adjusts the weights of connections between neurons to minimize the difference between predicted and actual outputs. Initially, the algorithm failed when the weights were set to zero, but Hinton's insight saved the day: he suggested using random weights instead of zeros, and once the PDP research group implemented this change, the algorithm began to function properly. The rest is history, and I'm proud of psychology's role in it!

    Link: https://lnkd.in/dyUfXAtg

    The Nobel Prize in Physics 2024

    nobelprize.org
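The symmetry problem described in the post above (why backprop stalls when every weight starts at zero, and why random initialization fixes it) can be sketched in a few lines of plain Python. This is only an illustration, not the PDP group's original code; the tiny 2-2-1 network and the XOR training data are made up for the demo.

```python
import math
import random

def train(init, epochs=500):
    """Train a tiny 2-input, 2-hidden, 1-output sigmoid network with
    plain backpropagation; return the input->hidden weight matrix."""
    w1 = [[init() for _ in range(2)] for _ in range(2)]  # input -> hidden
    w2 = [init() for _ in range(2)]                      # hidden -> output
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in data:
            # forward pass
            h = [sig(sum(w1[j][i] * x[i] for i in range(2))) for j in range(2)]
            y = sig(sum(w2[j] * h[j] for j in range(2)))
            # backward pass: output delta first, then hidden deltas
            d_out = (y - t) * y * (1 - y)
            for j in range(2):
                d_hid = d_out * w2[j] * h[j] * (1 - h[j])
                w2[j] -= 0.5 * d_out * h[j]
                for i in range(2):
                    w1[j][i] -= 0.5 * d_hid * x[i]
    return w1

random.seed(0)
zero_w = train(lambda: 0.0)
rand_w = train(lambda: random.uniform(-1, 1))
# With zero initialization both hidden units receive identical gradients,
# so their weight vectors stay identical forever: symmetry is never broken.
print(zero_w[0] == zero_w[1])  # True
print(rand_w[0] == rand_w[1])  # False
```

With all-zero weights, every hidden unit computes the same output and receives the same gradient, so the network can never learn features that require the hidden units to differ; starting from small random weights breaks that symmetry, which is the fix attributed to Hinton in the post above.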

  • DSML trend: Meta introduces AI-enabled video generator Movie Gen

    Meta has recently introduced Movie Gen, a cutting-edge AI-powered video generation tool that marks a major advancement in AI-generated media. Users can upload a photo of themselves and pair it with a text prompt to create a personalized video that captures human identity and movement. Movie Gen also allows users to edit existing videos or images through text commands, enabling precise modifications such as changing styles, adding transitions, or inserting new elements. For instance, with the command "Transform the lantern into a bubble that soars into the air," the lantern in the video seamlessly becomes a bubble, with a strikingly realistic effect.

    That's my take on it: As an experimental product, Movie Gen currently has limited functionality. It can produce high-definition videos up to 16 seconds long at 16 frames per second from text prompts; at the more standard 24 frames per second, the maximum duration is 10 seconds. While this falls short of OpenAI's Sora in terms of video length, the trend is clear: conventional video production and content creation roles are likely to be disrupted by such advances. We cannot ignore this trend and do business as usual!

    Link: https://lnkd.in/etWHm_Mb

    Meta Movie Gen

    ai.meta.com

  • DSML trend: Experts challenge Google's claim that AlphaChip AI outperforms humans

    Last week, Google introduced AlphaChip, a reinforcement learning-based method for designing chip layouts. AlphaChip aims to significantly reduce the time needed to create chip floorplans while improving their performance, power efficiency, and area optimization. Designing a floorplan for a complex chip, such as a GPU, typically takes about 24 months when done manually; even for less complex chips, the process can take several months and cost millions because of the large design teams involved. Google claims that AlphaChip can produce a chip layout in just a few hours.

    However, independent researchers have raised concerns about whether Google has demonstrated that its AI can truly outperform expert human designers or existing commercial software. In 2023, Andrew Kahng, a professor at the University of California, San Diego, retracted his Nature commentary that initially praised Google's work, noting that replication was necessary. After running a public benchmarking effort to replicate Google's method, Kahng found that the AI did not consistently surpass human experts or conventional algorithms.

    That's my take on it: Agreed! Google should bring in an independent company to conduct a randomized experiment to validate its claims. Google is not alone: Cadence has developed AI-based tools such as Cadence.AI, which helps automate PCB and chip design flows, and NVIDIA has researched reinforcement learning for chip floorplanning. While these AI-driven approaches show promise, the industry is still evaluating their effectiveness compared with traditional methods.

    Link: https://lnkd.in/gnuASE5G

    Google says its AI designs chips better than humans – experts disagree

    newscientist.com

  • Data Science Seminar

    Ethics in the Age of AI and Big Data: Navigating the Challenges of Emerging Technologies (Panel Discussion)

    Panelists: Daniel Cheung and Chong Ho Alex Yu
    Date: October 4 (Friday), 2024
    Time: 11:00 am Hawaiian Time
    Zoom: https://lnkd.in/gWzXgeB3

    Abstract: As AI and big data continue to revolutionize the world, they bring forth complex ethical dilemmas that demand urgent attention. This panel discussion will explore critical issues at the intersection of AI and ethics, including the implications of data labeling, bias in large language models, misuse of AI in academics, and many others. Join us as we engage in a thought-provoking dialogue on how to navigate the ethical landscape of AI and big data, ensuring that innovation aligns with our collective values and moral principles.

    Bios: Prof. Daniel Cheung earned his Ph.D. from Indiana University, Bloomington, and has been on the faculty of the University of Dayton since then. He publishes primarily in business ethics and philosophy of religion and is currently planning a book on greed in the contemporary capitalistic world. In recent years, he has become interested in big data ethics and has given presentations on the topic. (He asked ChatGPT for a bio of himself, but it turned out to be full of false statements. Google Gemini, on the other hand, declined to respond, admitting that it did not know much about Prof. Cheung!)

    Chong Ho (Alex) Yu is a Professor and the Program Director of Data Science at the College of Natural and Computational Sciences at Hawaii Pacific University (HPU). He also contributes to HPU's AI Task Force. His research spans a wide range of topics, including exploratory data analysis, data visualization, data mining, STEM education, the psychology of religion, data ethics, and the philosophy of science.

    Join our Cloud HD Video Meeting

    us02web.zoom.us

  • DSML trend: AI research shifts from academia to industry

    According to an article published in Nature on September 18, 2024, the rapid advancement of AI has shifted much of the cutting-edge research from academia to private industry, largely because of industry's significant funding advantages. A growing portion of AI research is no longer published in leading peer-reviewed scientific journals. The percentage of research articles at top AI conferences with at least one industry co-author rose from 22% in 2000 to 38% in 2020, and industry's share of the largest and most capable AI models jumped from 11% in 2010 to 96% in 2021.

    In China, however, academic institutions continue to play a crucial role in AI research and development. Between 2019 and 2021, China's share of AI research more than doubled. The gap between the United States and China is narrowing rapidly, with six of the top ten AI research institutes now located in China:

    1. Chinese Academy of Sciences (China)
    2. Harvard University (United States)
    3. Beijing University (China)
    4. Tsinghua University (China)
    5. Zhejiang University (China)
    6. Massachusetts Institute of Technology (United States)
    7. Max Planck Society (Germany)
    8. Helmholtz Association of German Research Centers (Germany)
    9. University of Science and Technology of China (China)
    10. Shanghai Jiao Tong University (China)

    That's my take on it: Uncle Sam must allocate more AI research funding to universities. I need course releases and funding to hire research assistants!

    Link: https://lnkd.in/eMyPN3yQ

    Rage against machine learning driven by profit

    nature.com

  • DSML trend: Qwen 2.5 might outperform Llama 3

    Two days ago, China's tech giant Alibaba Cloud announced its flagship AI product, Qwen 2.5, with models that appear highly competitive in the current landscape of large language models (LLMs). The Qwen 2.5 series offers models of varying sizes, ranging from 0.5 billion to 72 billion parameters, trained on a dataset of up to 18 trillion tokens. Alibaba asserts that its largest model, Qwen2.5-72B, outperforms competitors such as Meta AI's Llama-3.1-70B and Mistral-Large-V2 on benchmarks such as MMLU, and even smaller versions like Qwen2.5-14B and Qwen2.5-32B reportedly match the performance of larger models from other companies. Since its introduction in April 2023, the Qwen model series has gained significant traction: Qwen models have been downloaded more than 40 million times across platforms such as Hugging Face and ModelScope and have inspired the creation of over 50,000 derivative models on Hugging Face.

    That's my take on it: While I admire Alibaba Cloud's achievements, I am uncertain whether this high-ranking position in AI is sustainable. First, many of China's AI models rely on open-source frameworks developed by their U.S. counterparts; should these U.S. companies make their models proprietary, as OpenAI has done, it could pose challenges for China's AI progress. Second, regardless of how advanced the software becomes, AI systems still depend heavily on high-performance hardware, particularly GPUs, and the current U.S. restrictions on the sale of advanced GPUs to China significantly hinder its AI development. Furthermore, as with any new AI model, independent verification by researchers and organizations is necessary to validate performance claims; while benchmarks can provide insights, they do not always reflect real-world performance across various applications.

    Link: https://lnkd.in/gA5qRAix

    Alibaba's Qwen 2.5 AI models are gunning for Llama 3's crown in latest benchmark

    the-decoder.com

  • Recently my colleague and I published the following article on the positive impact of AI on higher education. The full text can be accessed via the DOI below. Thank you for your attention.

    Hanson, J., & Yu, C. H. (2024). Using AI-enabled tools to support minority students' success in higher education. International Journal of Learning and Development, 14(3). https://lnkd.in/gREpeaWG

    Abstract: Generative AI and large language models (LLMs) are transforming workplace literacy practices in the fourth industrial revolution. This study explored how master's students in educational leadership and teacher education programs at a minority-serving university in a low-SES urban metroplex in the south-central USA perceive and experience using LLMs and AI-assisted software. The focus was on how these tools support their learning and help them demonstrate knowledge. Key themes include equitable access to AI tools, diverse applications in academic and professional contexts, building student confidence through support and training, using AI to enhance academic readiness and skills, identifying individual needs, enabling higher-order thinking opportunities, and promoting ethical use and academic integrity. By addressing various issues of AI usage, institutions can better support underserved students in using disruptive technologies, contributing to their academic success and professional preparedness. Faculty must ensure students' needs are met before requiring the use of new technologies like generative AI and AI-enabled tools.

    International Journal of Learning and Development

    macrothink.org

  • DSML trend: OpenAI releases o1, which can solve complex problems

    Yesterday (September 12, 2024), OpenAI released the o1 series of models for ChatGPT, introducing two versions: o1-preview and o1-mini. This launch marks a significant step in AI development, particularly for complex reasoning tasks.

    Overview of the o1 models: The o1 models are designed to enhance reasoning capabilities, allowing the AI to spend more time "thinking" before responding. This approach aims to improve performance on intricate problems, especially in science, coding, and mathematics. In evaluations, the o1 model scored 83% on an International Mathematics Olympiad (IMO) qualifying exam, compared with just 13% for its predecessor, GPT-4o, and it reached the 89th percentile in Codeforces competitions, indicating strong programming capabilities. The o1 models also use a multi-step reasoning process, similar to human problem-solving, which helps break down complex tasks.

    That's my take on it: Several YouTubers tested the o1 model with trick questions like "after three hours, how many pieces of ice will remain in my hand?" and "how many letters 'r' are in the word 'strawberry'?" Unlike its predecessor, the new model provided the correct answers. Its markedly improved scientific reasoning and computation could help students enhance their analytical and programming skills, potentially democratizing access to advanced STEM education. We cannot ignore this trend. Educators have to be proactive!

    Link: https://openai.com/o1/

    Introducing OpenAI o1

    openai.com
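The "strawberry" question mentioned in the post above is trivial for ordinary code, which is exactly why it is used to probe an LLM's token-level reasoning: a model that sees the word as a handful of tokens cannot simply count characters the way a program does. A one-line check:

```python
# Count occurrences of the letter 'r' in "strawberry" -- trivial for code,
# historically tricky for token-based language models.
word = "strawberry"
print(word.count("r"))  # 3
```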

  • DSML trend: US, UK, and EU co-sign international treaty on AI development

    On September 5, 2024, the United States, United Kingdom, European Union, and several other countries signed the world's first legally binding international treaty on artificial intelligence, "The Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law." Developed by the Council of Europe, the treaty aims to regulate the use of AI while promoting responsible innovation, and it establishes a "Conference of Parties" to promote cooperation and information exchange among signatories. The treaty applies to all AI systems except those used in national security or defense, though even these activities must respect international law and democratic principles. It requires risk monitoring, documentation, and testing of AI systems.

    That's my take on it: As the name implies, the treaty sets forth seven key AI principles for signatories to implement, focusing on protecting human rights, democracy, and the rule of law, and it emphasizes that AI systems must align with human rights principles and uphold democratic values. Given the current geopolitical landscape and the principles emphasized in this treaty, it is unlikely that semi-democratic and non-democratic nations such as Russia and North Korea will join this particular AI agreement. In the future there may be a two-tiered system of AI governance in the international arena:

    1. A democratic bloc, led by democracies, emphasizing human rights and democratic values in AI development and deployment.
    2. An authoritarian bloc, focusing more on state control and national security concerns in AI governance.

    We have already seen similar divides in other areas of technology governance, such as internet regulation and data privacy laws; the AI treaty situation may follow a similar pattern of fragmentation.

    Link: https://lnkd.in/eVgsa8q9

    US, Britain, EU to sign first international AI treaty

    reuters.com
