GitHub's AI-Driven Future: Copilot Models, Micro Apps, and More!
Full Description
The AI race continues with lots of new updates straight from the GitHub Universe conference! New features from GitHub include: the ability to choose different AI models for GitHub Copilot Chat to use (OpenAI, Claude, Gemini, etc.), Copilot Workspace reviewing PRs, suggesting code changes, and validating fixes, GitHub Models for testing, experimenting, and building with different AI models (OpenAI, Llama, Mistral, etc.), and GitHub Spark, which uses natural language alone to generate "micro apps" that GitHub will host.

In addition to the GitHub Universe announcements, the October VS Code release has a bunch of new Copilot additions: Copilot Edits to change multiple files at once, Copilot Chat in a secondary sidebar, and Copilot code reviews before committing to GitHub.

Next.js's caching, which defaulted to very aggressive behavior in the past, has been overhauled in Next.js 15. Now, when devs add a request that fetches external data, they'll be prompted to either wrap it in a Suspense boundary or explicitly mark the module or function with the 'use cache' directive. This gives devs more fine-grained cache control, allowing some routes to serve dynamic, Suspense-supported data while others serve static, cached data.

In bonus news, part of the open source Flutter community decided to fork the project because it feels Google's core Flutter team doesn't have enough resources internally and isn't fast enough at reviewing PRs and implementing new features. The fork, "Flock," aims to add bug fixes and popular community features, and to generally be faster and more agile than Flutter.

And today's Fire Starter is about HTTP/3: the newest revision of HTTP, which offers better speed, security, and reliability.
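To make the Next.js 15 caching change above concrete, here's a minimal sketch of the two options devs are prompted to choose between: a function marked with the 'use cache' directive for static, cached data, and an unmarked dynamic component wrapped in a Suspense boundary. This assumes Next.js 15 with the experimental dynamicIO flag enabled; the API routes and data shapes are hypothetical placeholders.

```tsx
// app/page.tsx — a sketch, not a definitive implementation
import { Suspense } from 'react';

async function CachedPosts() {
  'use cache'; // opt this function's output into caching
  // hypothetical endpoint; the result can be served statically
  const res = await fetch('https://example.com/api/posts');
  const posts: { id: string; title: string }[] = await res.json();
  return (
    <ul>
      {posts.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}

async function LivePrice() {
  // no directive: this fetch is dynamic, so the component must
  // render inside a Suspense boundary
  const res = await fetch('https://example.com/api/price');
  const { price } = await res.json();
  return <p>Current price: {price}</p>;
}

export default function Page() {
  return (
    <>
      <CachedPosts />
      <Suspense fallback={<p>Loading price…</p>}>
        <LivePrice />
      </Suspense>
    </>
  );
}
```

The design idea is that the cached/dynamic split is now explicit and per-function rather than an implicit framework-wide default, so a single route can mix static, cached sections with streaming dynamic ones.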