A Conversation with Professor Tianqi Chen: From 0 to 1, Believe in Interesting Things
Tianqi Chen, author of TVM, MXNet, and XGBoost and assistant professor in the Machine Learning Department and Computer Science Department at CMU, has done work that makes it possible to natively deploy arbitrary large language models on all kinds of hardware. Is computing power still a problem? Enjoy.

Q1: Could you start by talking about your recent research directions, or the work you find interesting?
Professor Chen: My research style is problem-oriented. Over the past five years, the problem we have been working on is how to make machine learning accessible to more people while letting it run in more device environments. Our recent research has two focuses. The first is machine learning systems: not only solving the algorithms, but also making the systems engineering itself iterate faster. The second is building more open-source software, so that our work lives in the open-source community, where people can try our research results directly and we can get feedback from industry and other fields through that channel.
With the rise of generative AI and large models, we also hope to combine large models with our past accumulation and explore some new directions.
There has been plenty of recent progress in large-model deployment as well. Machine learning compilation has gone from receiving little attention to today's PyTorch, with the major vendors all gradually starting to try this direction; the whole field is in an unpredictable state. It is like when deep learning had just taken off and the big-data wave seemed to be passing, and nobody quite knew what to do. We are in an uncertain era again, but that is a good thing.
Q2: Since large language models emerged, what has your latest work mainly been?
Professor Chen: One concentrated direction over the past five years has been machine learning compilation. Machine learning engineering will become a bigger and bigger problem. To run on more efficient devices, we need to build reusable architecture rather than redo the systems engineering on every hardware platform.
Targeting characteristics of large language models such as their heavy memory consumption, the core of one of our recent projects is using machine learning compilation techniques to make deployment, training, and the supporting stack itself faster. Building on this work, the MLC series of projects lets us deploy language models on phones and other mobile devices, or in the browser through WebAssembly and WebGPU, and also run faster on all kinds of GPUs, including Nvidia, AMD, and Apple.
Once language models can be deployed across many devices, we can build more open solutions on top of this technology and lower the cost of deploying open models.
Q3: You have done a lot of interesting open-source work, from the early Apache MXNet to XGBoost and TVM. How do you connect these projects? Looking at more than a decade of machine learning's development, from the algorithm and systems levels, what are your reflections?
Professor Chen: Machine learning has changed enormously over the past years. It began with algorithmic modeling, deriving support vector machines and linear models; after big data landed it was applied to advertising and recommendation, and people started thinking about machine learning at scale; once the deep learning era began, data and compute entered everyone's considerations. That means building good machine learning systems became indispensable. It is also why I turned to machine learning systems research: the goal is to weigh algorithms, compute, and data together, find the motivation to solve problems in that process, and genuinely push the field of machine learning forward.
Q4: Nvidia has benefited visibly from the rise of large language models, and your work is bound to make hardware competition more diverse. How do you see the relationship between a company like Nvidia and open-source solutions as two development paths?
Professor Chen: I do not think the two are in competition; we also do a great deal of optimization work for Nvidia GPUs. Our solution can often take advantage of vendor-native libraries. Nvidia is indeed in the lead in many respects right now. We have no intention of necessarily surpassing Nvidia, and Nvidia is not flawless in every scenario either. What interests us is making the whole field run forward faster, regardless of who is doing the pushing.
So this is not a relationship where A and B must be compared. At a stage when compute is still relatively scarce, having more possibilities so that everyone can move forward together is what we want to see.
Q5: Can you talk concretely about which areas Nvidia is relatively ahead in?
Professor Chen: In general, hardware and the programming model. The traditional approach of simply piling on silicon can no longer solve the problem without changing the programming model, and as new cards are released, the programming model cannot be migrated directly.
What everyone needs now is a set of solutions that can iterate quickly to adapt whenever new hardware environments and new models appear. Why did deep learning develop so fast over the past decade? Because the barrier to deep learning modeling itself was pushed very low. In the next five to ten years, machine learning engineering will become very important: every possible combination of hardware, model, and data needs its own engineered solution. The goal we care about is making engineering iteration faster. On hardware iteration, Nvidia is still unmatched, so how to support new hardware better in the future is a very interesting topic.
But one thing is different now. Before, the other vendors simply did not work; they were effectively at zero. Now, through our solution, AMD has gone from not running at all to running, and running quite well, which from the other vendors' perspective is also progress. Going from 0 to 1 is what is truly valuable.
Q6: Since the barrier to deep learning has become very low, will the barrier to machine learning engineering for large language models also drop very low in the foreseeable future?
Professor Chen: It is like data science before I built XGBoost: everything was complicated, and now you basically tune a little and the results are decent. We hope to lower the barrier to machine learning engineering, and we have made some progress. But how low it can go, and when, is inseparable from the direction of the research, the investment in it, and especially the joint efforts of everyone in the open-source community.
Q7: In this direction, the open-source community working together may generate stronger incentives, while closed-source leaders would rather "hide" their work as proprietary technology?
Professor Chen: Setting models aside and speaking of machine learning engineering infrastructure, both open source and closed source can push things forward. Academia, if it wants to keep up with the times, has to keep moving forward. Even in modeling there are fully closed-source efforts, and there are efforts at least as open as LLaMA. I personally believe in open source; the open-source community iterates faster.
Q8: What challenges remain for running models on phones?
Professor Chen: Our solution can run a 7B model directly on a phone, and going from 7B down to 3B poses no real pressure; the solutions will certainly mature. Then there is the question of whether you want to run one at all, how large a model you want to run, and whether a model of that size is actually useful; that is a separate question.
Our current solution runs 3B directly: the MLC app on the App Store is available to download and try. With 7B the phone may heat up and energy consumption can be a bit of a problem, though high-end phones can run it. The next questions are how to integrate with vertical applications, and whether running on the phone is even necessary, since tablets and laptops can do it too. The reason to run on the phone is that people care about their data not leaving the device.
Our current MLC solution lets you run on a phone, on a tablet, on an Apple laptop, in the browser, and of course on a server as well, so it offers a good deal of flexibility.
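A quick back-of-the-envelope calculation shows why 3B is comfortable on a phone while 7B is borderline: at 4-bit quantization the weights alone occupy a few GiB. The parameter counts and bit widths below are illustrative assumptions, not figures from the interview.

```python
# Back-of-the-envelope weight memory for an LLM at different precisions.
# The 7B/3B parameter counts and the 16-bit vs 4-bit widths are
# illustrative assumptions; activations and KV cache add more on top.

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory needed just to hold the weights, in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

for n_params, name in [(7e9, "7B"), (3e9, "3B")]:
    for bits in (16, 4):
        print(f"{name} @ {bits}-bit: "
              f"{weight_memory_gb(n_params, bits):.1f} GiB")
```

A 7B model drops from about 13 GiB at 16-bit to about 3.3 GiB at 4-bit, and 3B to about 1.4 GiB, which is roughly the difference between "does not fit on a phone" and "fits, but may still run hot."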
Q9: Since your solution lets people run models on their phones, how does this path differ from that of Nvidia and the other big players?
Professor Chen: Why run on the phone? Why run on the edge? Because I believe there are different application scenarios. Take gaming: if every sentence you exchange with an NPC costs ten cents, that is not impossible, but if in some scenarios a decent task can be completed directly on the local machine, I think people would welcome it.
For example, to have a truly capable personal assistant, you have to disclose personal information to the greatest extent. Are you willing to send that content to a third party? Is it more secure on the local machine? Then there is personalization: how do you make the language model understand you better? Each of these directions can evolve into applications of different forms.
When the computer was first invented, there was a famous claim that eight supercomputers, sitting in laboratories in a few countries, could satisfy the world's computing needs. Then the personal computer appeared. If the barrier to models can be lowered further, will an era of Personal AI emerge? That is also a direction we want to realize.
Q10: If tens of thousands of A100s are running a training job, from your perspective, which problems most need experienced infrastructure architects and systems people to solve?
Professor Chen: Since I have no experience with large-scale training in academia, my sense is that, first of all, machine learning infrastructure itself is still in its infancy, and it is quite different from traditional data infrastructure. When many GPUs must work together for long stretches, raising and exploiting the utilization of different hardware, along with optimizing the hardware itself, is a cross-layer process that involves interplay between hardware engineering, the systems level, and the model level, and there is certainly demand there. The machine learning systems field is drawing more and more attention, so talent in this area will keep growing. I also hope to lower the barrier to this field, for example by using machine learning compilation to cut down on repeated labor. The need for engineering deployment has always existed; it is simply taken more seriously now. Our core sense is that this field has strong demand for talent in every respect, which is naturally good news for practitioners in machine learning and engineering deployment. We also keep thinking about how to use automation techniques to gain outsized leverage from small effort.
Q11: Nvidia, AMD, and Apple each have their own architectures. How do they differ when you actually run on them?
Professor Chen: In terms of the overall ecosystem, Nvidia is still relatively the best at present. AMD's main problem is software; different vendors put completely different amounts of effort into it. The whole pipeline from hardware to model to application has to be built up: from the model to the foundation to memory optimization, even the infrastructure for modeling in the first place. After several years of effort, we can now build a fairly compatible infrastructure on top of compilation. We have also experimented with AMD; the flagship gaming card 7900 XTX, for instance, can reach roughly 80% of a 4090. We hope automation can solve more of the software problems. We can now run multi-GPU setups, for example two gaming cards together can run a 70B model, roughly the largest LLaMA, and we can play with all kinds of hardware, which is quite exciting.
Apple has an edge at the architecture level, particularly the Ultra, which has a large memory. If you want a machine that runs a LLaMA model directly, the simplest way is to buy Apple's latest M2 laptop. Our solution works on Apple as well.
Software problems take everyone's effort, but as long as the infrastructure is there, they can be solved. Of course, it still has to be engineered. Our solution is not restricted to the inference side; it just takes opening up some new lines of thinking.
Q12: Some of your work, like XGBoost, has been hugely influential. How do you choose research problems?
Professor Chen: What we care about most is how to integrate all the work together. The challenge of machine learning engineering is not solving one specific problem; it is that there are ten solutions, this one for transformations, another for sparsity, another for batching, and possibly more. That kind of modularity is exactly why deep learning developed so fast: its software engineering is modular to a remarkable degree.
If you work on ResNet, you do not need to care what detection is, because you can naturally attach a detection head to a ResNet backbone and use it. People working on optimizers do not need to care what ResNet is; as long as the optimizer is well written, you plug ResNet in and it works. Machine learning modeling is now modular and highly reusable, but machine learning engineering is not quite there yet. So our recent focus is on designing a set of infrastructure to solve this class of problems.
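The modularity described above can be sketched in a few lines: a "backbone" and a "head" only agree on an intermediate feature format, so either part can be swapped independently. Everything below (names, shapes, the fake heads) is invented for illustration and is not any real framework's API.

```python
# Illustrative sketch of model modularity: a backbone maps raw input to
# features, and any head that accepts those features can be attached.
# All names and shapes here are made up for illustration.
from typing import Callable, List

Features = List[float]

def toy_backbone(pixels: List[float]) -> Features:
    """Stand-in for e.g. a ResNet backbone: input -> feature vector."""
    s = sum(pixels)
    return [s, s * 0.5]          # pretend 2-dimensional feature vector

def classification_head(feats: Features) -> int:
    """Pick a class id from the features."""
    return 0 if feats[0] < 1.0 else 1

def detection_head(feats: Features) -> List[float]:
    """Pretend to emit one bounding box [x, y, w, h] from the features."""
    return [feats[0], feats[1], 1.0, 1.0]

def model(backbone: Callable, head: Callable, x: List[float]):
    # The only contract between the parts is the feature format,
    # so heads and backbones compose freely.
    return head(backbone(x))

print(model(toy_backbone, classification_head, [0.25, 0.25]))  # 0
print(model(toy_backbone, detection_head, [0.25, 0.25]))
```

The point of the sketch is the `model` function: neither head knows what the backbone is, mirroring how a detection head is attached to a ResNet backbone without either side caring about the other's internals.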
Q13:有人说您是一个机器学习编译的全新学科的一个创造者,您对这点怎么看?
陈教授:这也不算是一个全新学科,我们在编译本身有很多积淀和投入。我个人的研究方式属于问题驱动,当时的想法是探索解决机学习工程问题要用什么方法,那么多硬件后端,如何以最少的能力撬动这个领域?我们觉得自动化是必须之路,编译工程就是其中一条路径。编译本身的定义也在不断演进,我们最新的解决方案可以整合手工方案和自动方案,加速工程迭代。 CXO UNION-CXO联盟(cxounion.cn)
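Operator fusion is one concrete transformation a machine learning compiler performs: several elementwise operators are merged into a single pass so the intermediate arrays never materialize. The toy "IR" below (a plain list of Python functions) is invented for illustration; real compilers such as TVM work on a far richer program representation.

```python
# Toy operator fusion: combine elementwise ops into one pass over the
# data, avoiding intermediate buffers. The "IR" here (a list of Python
# functions) is invented for illustration only.
from typing import Callable, List

def fuse(ops: List[Callable[[float], float]]) -> Callable[[float], float]:
    """Compose elementwise ops into one function applied per element."""
    def fused(x: float) -> float:
        for op in ops:
            x = op(x)
        return x
    return fused

def run_unfused(data: List[float], ops) -> List[float]:
    # One full pass (and one intermediate list) per operator.
    for op in ops:
        data = [op(x) for x in data]
    return data

def run_fused(data: List[float], ops) -> List[float]:
    # A single pass; no intermediates ever allocated.
    f = fuse(ops)
    return [f(x) for x in data]

relu = lambda x: max(x, 0.0)
scale2 = lambda x: 2.0 * x
data = [-1.0, 0.5, 3.0]
assert run_unfused(data, [relu, scale2]) == run_fused(data, [relu, scale2])
print(run_fused(data, [relu, scale2]))  # [0.0, 1.0, 6.0]
```

Both paths compute the same result, but the fused version touches each element once; on real hardware that saves memory bandwidth, which is often the bottleneck for elementwise chains.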
Q14: What are the prospects for tree-based models?
Professor Chen: Tree-based models are still an indispensable technical solution in many industries; among data scientists' top five tools, XGBoost remains on the list. Technology in every future direction will still need tree models, and they will stay very important for the foreseeable future, especially for tabular data, in finance and other fields, where they are still used heavily.
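The core idea behind XGBoost-style gradient boosting can be sketched with depth-1 trees (stumps) fit to residuals under squared loss. The toy data, learning rate, and round count below are illustrative assumptions; the real library adds regularization, second-order gradients, sparsity handling, and much more.

```python
# Minimal gradient boosting with decision stumps under squared loss:
# each round fits a stump to the current residuals, then adds a damped
# version of its prediction. Toy data and hyperparameters only.
from typing import List, Tuple

def fit_stump(x: List[float], r: List[float]) -> Tuple[float, float, float]:
    """Best (threshold, left_value, right_value) minimizing squared error."""
    best = None
    for t in x:  # candidate thresholds: the data points themselves
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        err = sum((ri - (lv if xi <= t else rv)) ** 2
                  for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, t, lv, rv)
    return best[1], best[2], best[3]

def boost(x: List[float], y: List[float],
          rounds: int = 50, lr: float = 0.3) -> List[float]:
    pred = [0.0] * len(x)
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]   # gradient of sq. loss
        t, lv, rv = fit_stump(x, resid)
        pred = [pi + lr * (lv if xi <= t else rv)
                for xi, pi in zip(x, pred)]
    return pred

x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 1.0, 3.0, 3.0]   # a step function: easy for trees
print([round(p, 2) for p in boost(x, y)])  # converges to [1.0, 1.0, 3.0, 3.0]
```

Each round shrinks the residual by a constant factor (here 1 - lr), which is why the ensemble converges to the step function; this residual-fitting loop is the "gradient" part of gradient boosting.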
Q15: Have you run into any particular challenges in your research?
Professor Chen: Challenges are always there; research has no standard answers. Trying, failing, industry, academia: everyone has all kinds of goals to push toward, and it is all rather competitive (laughs)! But we truly enjoy being able to take part in pushing this field forward.
Q16: How do you judge whether your research direction is a relatively correct path?
Professor Chen: You cannot judge it; you can only believe. Steve Jobs had a saying: "The journey is the reward." Often the goal is not the final trophy but the road you walk.
As for whether a given thing is right or wrong: for example, twelve years ago I started doing deep learning, a decision that was both very right and very wrong. The goal at the time was to use deep learning algorithms to solve ImageNet, and two and a half years later there were no results at all. But the experience accumulated and carried over into everything I did afterward. The key is still to "do what you find interesting."
Q17: Do you consider yourself caught up in the rat race?
Professor Chen: If you are doing interesting things there is no rat race; the point is to enjoy the process of doing them (laughs).
Celebrating Vitality
What do you think the vitality of technology is?
Exploration amid uncertainty. In the process of moving forward, you discover many interesting surprises, and that is the proof of technology's vitality.
—— Professor Tianqi Chen, Machine Learning Department and Computer Science Department, Carnegie Mellon University

Reposted by CXO UNION (cxounion.cn) from Vitalbridge (绿洲资本); edited and translated by CXO UNION.