KAIST CS Board RSS : Events ko <![CDATA[[PhD defense] 김태준 3/24 14:00 N1, Room 701]]>

]]>
03/07 12:26
<![CDATA[Talk by Dr. Minsuk Chang / Google DeepMind (Feb 26): Cooperative Intelligence of Language Agents]]> - Time: 3-4 pm, Feb. 26 (Wed)
- Location: N1 114 (Offline) / https://us02web.zoom.us/j/87551791416 (Online)
- Language: English
- Host: Juho Kim
Title: Concordia: Advancing the Cooperative Intelligence of Language Agents
Abstract:
This talk introduces Concordia, a framework for developing and evaluating cooperative intelligence in agents built with Large Language Models (LLMs). Building upon the rich history of Agent-Based Modeling (ABM), Concordia empowers researchers to construct Generative Agent-Based Models (GABMs), where LLM-powered agents interact with each other and their environment through natural language. Concordia facilitates the creation of complex, language-mediated simulations of diverse scenarios, from physical worlds to digital environments involving apps and services. By enabling the study of cooperation among LLM agents in challenging scenarios, such as those involving competing interests and potential miscommunication, Concordia aims to advance research on cooperative and social intelligence. This research is critical as we witness the rapid growth of LLMs and anticipate the increasing prevalence of personalized agents in our lives. The ability of these agents to effectively cooperate with one another and with humans will be crucial for their successful and beneficial integration into society.
Bio:
Minsuk Chang is a Research Scientist at Google DeepMind, where his work centers on the acquisition of skills and knowledge in both artificial and natural agents. His research investigates the dynamics of learning processes, seeking to elucidate how agents effectively gather information, adapt to novel environments, and expand their behavioral repertoires. Prior to joining Google DeepMind, he contributed to Naver AI Lab's large language model initiative, HyperClova. He holds a PhD in Computer Science from KAIST.
]]>
03/07 12:26
<![CDATA[[Seminar Notice] February 19 (Wed) at 16:00, Yizheng Chen (University of Maryland)]]> o Date and Time: February 19, 2025 (Wed) at 16:00

o Location: Offline (Room 201, N1 Building)

o Speaker: Yizheng Chen (University of Maryland)

■ Title: Benchmarking LLMs for Secure Code Generation

■ Abstract

Large Language Models (LLMs) have demonstrated promising capabilities in discovering and patching real-world security vulnerabilities. But how do we determine which LLM-based system performs best?

In this talk, I will explore the challenges of benchmarking LLMs for cyberdefense.

I will begin by presenting our work on evaluating LLMs' ability to generate secure code. Notably, we find that results from prior code-generation benchmarks do not translate to LLMs' secure coding performance in real-world software projects.

Next, I will discuss a key issue: memorization. LLMs may not be solving security problems from first principles but rather recalling secure solutions they have already seen. Finally, I will discuss future research directions in effectively evaluating and improving LLMs for cybersecurity applications.

■ Bio

Yizheng Chen is an Assistant Professor of Computer Science at the University of Maryland. Her research focuses on Large Language Models for Code Generation and AI for Security.

Her recent work PrimeVul has been used by Gemini 1.5 Pro for vulnerability detection evaluation. Previously, she received her Ph.D. in Computer Science from the Georgia Institute of Technology, and was a postdoc at the University of California, Berkeley and Columbia University. Her work has received an ACM CCS Best Paper Award Runner-up, a Google ASPIRE Award, and a Top 10 Finalist placement in the CSAW Applied Research Competition.

She is a recipient of the Anita Borg Memorial Scholarship.

]]>
03/07 12:26
<![CDATA[Feb 6/7: Two HCI/Social Computing talks by Prof. Mark Ackerman from UMich]]> [Workshop Seminar] Professor Mark S. Ackerman from the University of Michigan, School of Information will be joining as part of a workshop with the Phenomenal Data Lab at KAIST (https://phenomenaldatalab.kaist.ac.kr) on the topic of: "Beyond Gen AI: Sociotechnical AI Tools for Academic Literature Reviews". He will be going over the basics of sociotechnical approaches to CSCW/HCI, and then early-stage findings from the collaborative project will be presented and discussed as examples of how a sociotechnical approach is relevant for a Gen AI (or post-Gen AI) world. Students are especially encouraged to attend.
Date and time: February 6 (Thursday) 1:30 PM-3:00 PM (KST)
Location: N22 1st floor Room 103 (이민화홀)
Language: English
Host: Prof. Tom Steinberger

[Main research seminar] Professor Mark S. Ackerman from the University of Michigan, School of Information will give an in-person talk on sociotechnical approaches to computing. Specifically, he will be reflecting on his several decades of research helping pioneer the sociotechnical study and design of making computing systems useful for humans and organizations, drawing lessons for future research.
Date and time: February 7 (Friday) 10:30 AM - 12:00 PM (KST)
Location: N22 1st floor Room 103 (이민화홀)
Language: English
Host: Prof. Tom Steinberger

]]>
03/07 12:26
<![CDATA[[PhD defense] 신희찬 12/16 11:00 E3-1, Room 4448]]>

]]>
03/07 12:26
<![CDATA[[KAIST SoC Colloquium] Breaking the Memory Wall: Near-Data Processing for Hyperscale Applications]]> Speaker: Gwangsun Kim
Title: Breaking the Memory Wall: Near-Data Processing for Hyperscale Applications
Time: 16:00, December 9, 2024
Location: E3-1 1501 (Offline)
Language: English
CS966/CS986 URL: /colloquium/
This is an offline lecture, held in E3-1, Room 1501.
Abstract

The memory wall has long been recognized as a critical challenge in high-performance systems, and it has recently become even more significant due to the exponential growth of machine learning model sizes. Meanwhile, recent advancements in interconnect technology, such as Compute Express Link (CXL), enable scalable memory system designs to address the memory capacity wall. Moreover, by offloading data and computation to CXL memory expanders to realize Near-Data Processing (NDP), the memory bandwidth wall can also be effectively mitigated. However, designing such a system should be done carefully, considering various design aspects that can affect the practicality of the solution.
In this talk, I will discuss key considerations and directions for building a practical NDP system architecture, including general-purpose computing, low-latency host communication, standard compliance, and cost-effectiveness. I will then present our recent work on an NDP architecture called Memory-Mapped NDP (M²NDP). M²NDP consists of two components: 1) Memory-Mapped Function (M²func), which enables low-latency host-device communication by addressing the overhead of conventional ring buffer-based task offloading, and 2) Memory-Mapped μthreading (M²μthread), a general-purpose, cost-effective NDP unit architecture that aims to maximize resource utilization by hybridizing CPU and GPU architectures. Finally, I will briefly outline future research directions based on the M²NDP architecture.

Bio

Gwangsun Kim is an Assistant Professor in the Department of Computer Science and Engineering at POSTECH. Previously, he was a Senior Research Engineer and Senior Performance Engineer at Arm Inc. He received the B.S. degrees in Electronic and Electrical Engineering and Computer Science and Engineering from POSTECH in 2010, and the M.S. and Ph.D. degrees in Computer Science from KAIST in 2012 and 2016, respectively. He has worked on various areas of computer architecture and systems, including memory systems, parallel architectures, GPU computing, systems for machine learning, near-data processing, networking, deep learning compiler, and simulation methodology. He is particularly interested in designing practical architectures for high-performance and scalable systems.

]]>
03/07 12:26
<![CDATA[[PhD defense] 이근홍 12/11 09:00 N1, Room 601]]>

]]>
03/07 12:26
<![CDATA[[PhD defense] 이동건 12/11 10:30 E3-1, Room 4420]]>

]]>
03/07 12:26
<![CDATA[[PhD defense] 하태욱 12/9 11:30 E3-1, Room 3420]]>

]]>
03/07 12:26
<![CDATA[[KAIST SoC Colloquium] Runtime Protocol Refinement Checking for Distributed Protocol Implementations]]> Speaker: Aurojit Panda
Title: Runtime Protocol Refinement Checking for Distributed Protocol Implementations
Time: 16:00, December 2, 2024
Location: Zoom (Link: https://kaist.zoom.us/j/84101178320?pwd=QnRFVURYbTNNbFZ2ejBTRlB0NzNHdz09)
Language: English
CS966/CS986 URL: /colloquium/
Abstract

Despite significant progress in verifying protocols, services that implement distributed protocols, e.g., Chubby or Etcd, can exhibit safety bugs in production deployments. These bugs are often introduced by programmers when converting protocol descriptions into code. In this talk I will describe a new technique we have been developing to identify these bugs at runtime: Runtime Protocol Refinement Checking (RPRC). RPRC systems observe a deployed service's runtime behavior and notify operators when this behavior evidences a protocol implementation bug, allowing operators to mitigate the bug's impact and developers to fix the bug. We have developed an algorithm for RPRC and implemented it in a system called Ellsberg that targets services that assume the asynchronous or partially synchronous model, and fail-stop failures. We designed Ellsberg so it makes no assumptions about how services are implemented, and requires no additional coordination or communication. We have used Ellsberg with three open source services: Etcd, Zookeeper and Redis Raft.

Bio

Aurojit Panda is an assistant professor in the Computer Science department at New York University working on systems and networking. He received his PhD in 2017 from UC Berkeley, where he was advised by Scott Shenker. He has received several awards, including a VMware Early Career Faculty Award, a Google Research Scholar Award, an NSF Career award, best paper awards at EuroSys, SIGCOMM and OSDI, and a EuroSys test of time award.

]]>
03/07 12:26
<![CDATA[[PhD defense] TRIRAT PATARA 12/5 14:00 E3-1, Room 2452]]>

]]>
03/07 12:26
<![CDATA[[PhD defense] MAHE ZABIN 12/5 10:00 E3-1, Room 4420]]>

]]>
03/07 12:26
<![CDATA[[PhD defense] 장명재 12/5 17:00 N1, Room 701]]>

]]>
03/07 12:26
<![CDATA[[PhD defense] 김종명 12/5 16:00 E3-1, Room 4420]]>

]]>
03/07 12:26
<![CDATA[[PhD defense] 이선재 12/3 09:30 E3-1, Room 4420]]>

]]>
03/07 12:26