Model parameter quantity: N/A
Affiliated organization: TeleAI
License Type: Closed Source
Release time: September 26, 2024
Model Introduction
TeleChat2, developed by China Telecom, is a large language model that excels at interactive dialogue, question answering, and creative assistance. It supports encyclopedia-style queries, code generation, and long-form text creation, providing useful information and inspiration while demonstrating strong logical understanding and the ability to maintain coherence across long texts.
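The capabilities described above are those of a hosted chat model. As a rough illustration of how such a model is typically queried, here is a minimal sketch assuming an OpenAI-compatible chat-completions endpoint; the base URL, model identifier, and API key below are placeholders, not documented TeleChat2 values.

```python
# Minimal sketch of querying a hosted chat model such as TeleChat2, assuming
# an OpenAI-compatible chat-completions endpoint (a common serving convention,
# not confirmed for TeleChat2). base_url, api_key, and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",             # placeholder credential
)

response = client.chat.completions.create(
    model="telechat2",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short poem about the sea."},
    ],
)

# The assistant's reply is in the first choice of the response.
print(response.choices[0].message.content)
```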
Capability Assessment
Language comprehension ability (score: 5.8): Often makes semantic misjudgments, leading to obvious logical disconnects in responses.
Knowledge coverage scope (score: 6.7): Has significant knowledge blind spots, often showing factual errors and repeating outdated information.
Reasoning ability (score: 5.2): Unable to maintain coherent reasoning chains, often causing inverted causality or miscalculations.
Related models
Qwen2.5-7B-Instruct: Like Qwen2, the Qwen2.5 language models support context lengths of up to 128K tokens and can generate up to 8K tokens. They also maintain multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
GPT-4o-mini-20240718: GPT-4o-mini is an API model from OpenAI; the specific version is gpt-4o-mini-2024-07-18.
Gemini-2.5-Pro-Preview-05-06: Gemini 2.5 Pro is a model released by Google DeepMind, listed here under the version Gemini-2.5-Pro-Preview-05-06.
DeepSeek-V2-Chat-0628: DeepSeek-V2 is a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236B total parameters, of which 21B are activated for each token. Compared with DeepSeek 67B, DeepSeek-V2 achieves stronger performance while saving 42.5% of training costs, reducing the KV cache by 93.3%, and boosting maximum generation throughput to 5.76 times.
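The DeepSeek-V2 entry above quotes a total/active parameter split that illustrates why Mixture-of-Experts models are cheap at inference time: each token is routed to only a few expert sub-networks. The short sketch below simply works through that arithmetic using the figures quoted in the entry; nothing in it is specific to DeepSeek-V2's actual routing code.

```python
# Sanity-check the MoE figures quoted for DeepSeek-V2: only a small fraction
# of the 236B total parameters is activated for any single token.
total_params = 236e9   # total parameters (236B)
active_params = 21e9   # parameters activated per token (21B)

active_fraction = active_params / total_params
print(f"Parameters active per token: {active_fraction:.1%}")   # ~8.9%

# The quoted 93.3% KV-cache reduction leaves roughly this much cache
# relative to the dense DeepSeek 67B baseline:
print(f"KV cache remaining vs. DeepSeek 67B: {1 - 0.933:.1%}")  # ~6.7%
```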