1979 & pro-youth games, linking in from 1951: the most curious moments of Von Neumann's and The Economist's diarists, and why we co-brand with AIgoodmedia.com. When The Economist sent dad, Norman Macrae, to pre-train with Von Neumann at Princeton in 1951, they agreed The Economist should start up leadership Entrepreneurial Revolution surveys: what goods will humans unite wherever they first link in to 100 times more tech per decade? Johnny added a final twist in notes for his biography: "Unfortunately Economics is Not Mathematical. One day only AI maths can save our species."

Breaking: help prep the AI rehearsal Fringe at UNGA, Sept 2023, NY - chris.macrae@yahoo.co.uk
July: Guterres is choosing the top 20 members of the UN AI High-Level Advisory Body. Bard says Hassabis will chair this with the UN tech envoy; members include Stanford's Fei-Fei Li, Allen's Etzioni, Sinovation's Kai-Fu Lee...
Worldclass LLMs & the Royal Family's 150-year survey: can a weekly newspaper help multiply trust around worldwide human development?

Game AI: Architect Intelligence. EconomistDiary invites you to co-create this game and apply bard.solar; personalise your pack of 52 player cards. Whose intelligence over the last 75 years most connects human advancement at every GPS concerning you and yours on the planet?
We offer three types of tours sampling rockstars of intelligence-for-good, and welcome guest tours. Alpha, chronological, began in 1951 and runs through four decades at The Economist; Gamma, back from the future of the 2020s, began in 1984; Beta, intergenerational connectors, are more recent quests. Try the AI game out; we'd love to hear whose action networks inspire you: chris.macrae@yahoo.co.uk
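As a sketch of how a pack of 52 player cards and the three tour types might be dealt, here is a hypothetical hand-drawing routine; the names and tour groupings are illustrative samples from this page, not a definitive pack:

```python
import random

# Illustrative tour groupings sampled from this page
# (a full pack would hold 52 cards).
tours = {
    "Alpha": ["Crowther", "Von Neumann", "Einstein", "Turing", "JF Kennedy"],
    "Gamma": ["Hassabis", "Fei-Fei Li", "Guterres", "Etzioni", "Kai-Fu Lee"],
    "Beta":  ["Steve Jobs", "Fazle Abed", "Mr Sudo", "JY Kim", "James Grant"],
}

# Flatten to (tour, player) cards, shuffle reproducibly, and deal a hand.
deck = [(tour, name) for tour, names in tours.items() for name in names]
random.seed(1951)
random.shuffle(deck)
print(deck[:3])  # a sample 3-card hand
```

Substituting your own regional counterparts, as the game invites, is just a matter of editing the lists.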
Alpha1: JF Kennedy, Neumann-Einstein-Turing, Crowther. The youth visions for the 1960s launched by Kennedy were as great as any known to us: the space race, the Peace Corps, Atlantic-Pacific win-win trade. Kennedy had studied quite traditional economic gurus at Harvard (eg ); served in the US Navy in the Pacific theatre of World War 2; and discovered The Economist's stories of exciting economic possibilities. These had emerged from editor Geoffrey Crowther, whose 20+ years of editing included the 1943 centenary autobiography of The Economist: it had been a mistake, in 1843, to vision a newspaper helping a 20-something Queen Victoria transform from a slave-making empire to commonwealth trading, but Crowther thought good news media was worth another go. He sent a rookie journalist, who had survived being a teen navigator for Allied Bomber Command in Burma, to pre-train with Neumann at Princeton in 1951, as well as to interview the NY-UN in its year 6. Neumann explained that, after spending their lives mainly on the science the Allies needed to beat Hitler, Neumann, Einstein and Turing wanted a good legacy: digitalisation (see eg Neumann's last lecture notes, delivered at Yale, "The Computer and the Brain"). There were four intergenerational crises the NET foresaw: sorting out energy; designing win-win economics; sorting out worldwide cooperations; and everything else the UN and multilaterals were being asked to resolve. Neumann trained the Economist journalist in the leadership survey: "What goods will humans unite wherever they have early access to 100 times more tech per decade?"
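The survey's premise compounds quickly: at the stylized rate of 100 times more tech per decade, seven decades from 1951 multiply access by 100^7. A minimal sketch of that arithmetic (the per-decade factor is the survey's assumption, not a measured constant):

```python
def tech_multiplier(start_year: int, end_year: int, per_decade: float = 100.0) -> float:
    """Cumulative access multiplier under the survey's 100x-per-decade premise."""
    decades = (end_year - start_year) / 10
    return per_decade ** decades

# Seven decades, 1951 to 2021: 100^7, a hundred-trillion-fold multiplier.
print(f"{tech_multiplier(1951, 2021):.0e}")
```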
(Breaking, July 10) Gamma1: Hassabis, Fei-Fei Li, Guterres, Oren Etzioni, JY Kim, Ng, Yang, Chang, Chang. There are lots of alternative Gammas, but we start with: two engineers who transformed AI from 2010, when they first met at Stanford and discussed FFL's NSF funding of ImageNet since 2006; two public health servants who in 2016 weren't happy with just talking about the 17 new UN goals and have been asking AI genii to help digitally roadmap UN2 ever since; and a Taiwanese American in Silicon Valley, a Chinese American in Taiwan, and Samsung's Korean who partnered Taiwan's chip-making genii. These stories have lots of personal courage as well as brilliance; any reporting errors are mine alone: chris.macrae@yahoo.co.uk. My family has made 100 trips to Asia from the West but still has no fluency in oriental languages, so I am biased: I believe NOW! that LLMs can connect the best cooperation intelligences ever and urgently map life-critical knowhow through every global village.
Beta1 celebrates the massive web and intergenerational gifts of Steve Jobs, Fazle Abed, Mr Sudo, JY Kim and Mr Grant. You will probably know Jobs started two digital networking revolutions, with 1984's Macintosh personal computer and Apple, and 2007's iPhone. At the bottom of the pyramid, you may not know (Asia-66-percent-of%20Intelligence-for-good-part-1.docx) that Fazle Abed linked up 1 billion tropical Asian real housewives & entrepreneurs towards empowering the end of poverty, and that Steve hosted Silicon Valley's 65th birthday party for Abed in 2001. They brainstormed the transformative education which the PC hadn't delivered: could the mobile era be visioned to do so? Mr Sudo had partnered Abed and Bangladesh villagers in "leapfrog" mobile experiments starting in 1995. By 2001, as Jobs was introducing Abed to eg Stanford friends, Kim had discovered that Abed's women were networking the most effective solution to rural tuberculosis; he introduced Gates and Soros to Abed, as all four wanted the 2000s Global Fund to end TB, HIV & malaria. At the same time, Guterres had moved from Portuguese prime minister to the Red Cross and then to UN servant leadership of refugees. Meanwhile, back in 1980, it was UNICEF's James Grant who had discovered Fazle Abed's women's oral rehydration network, which was saving the lives of the 1 in 3 infants who previously died of diarrhea in the tropics' humid villages. Grant became the worldwide marketer of how parents could mix water, sugar and salts as the life-saving cure for diarrheal disease; naturally, the James Grant College of Global Public Health has become the cornerstone of all the new university cooperations Abed and Jobs started brainstorming in 2001.
Here we discuss why 73 years as biographers of Von Neumann's future visions suggest it's critical to map the intelligences who got us to the 2020s and today's giant co-leapers, the Gamma tours. This also opens the door to which intelligences at national or other place levels contribute what; see our 60+ years of intelligences, and eg the discussion of why, to end extreme poverty, we need one open global university of poverty.
Beta2: NB how different the scope of 2020s AI is from a cross-selection of web2 and web1 engineers of the last quarter century. NB the true purpose of gamifying Architect Intelligence: borderless engineering can help humans vision the 2020s' co-creation of web3 and millennials' development beyond extinction. Kai-Fu Lee, Ng, Melinda Gates, Koike, Lila Ibrahim, Jobs, Satoshi, Houlin Zhao, Allen, Musk, Brin, Page, Bezos, Ma, Zhengfei, Torvalds, Berners-Lee, Masa Son: it would be a pity if short-term nationalism stopped us 8 billion humans learning from these tireless innovative beings. Do sub in your regional counterpart. Also note what no conventional strategist saw as possible Intelligence before 2017. To clarify: start with Kai-Fu Lee. His 2017 best seller on AI doesn't explain the AI that's changing every possibility of the 2020s, but it does a good job of AI up to 2017. He also has a unique view because he was sent by Google to explore China, fell ill around the time Google exited China, and wrote up the AI that inspired him to reinvent himself both as a venture capitalist in the midst of Asia's most extraordinary student suburb (Zhong...) and as a curious observer. I see Ng, Ms Gates, Koike and Ibrahim as civil education heroines and heroes; who are yours? Satoshi, Zhao, Allen and Musk are gamechangers taking on conflicts that journey us all through tipping points. One day the world may decide it was a blessing that a corporate like Google and a revolutionary uni like Stanford co-habited the same 100 square miles; is there any other comparable 100 square miles of brainworkers for humanity? (I love Hong Kong, but that's its own story.) The other five kept digital movements alive; they merit being valued as engineering heroes before you decide how to translate systemic components to your region's, and mother earth's, urgent needs.

Monday, July 31, 2023

Johnny said: logic proofs of mathematicians put their hypotheses at the top; economists don't. QED, economics is not mathematical. When we design digital futures, be careful and open in choosing human intelligences.

Diaries grounding our future-history genre of human intelligences began in 1951 with John von Neumann and The Economist; from 1984 we transferred this to books in the genre of 2025 Report (see science editor Viscount Matt Ridley's review). Should we be looking for another publisher, or how would you like to play AI Game 1, Architect of Intelligence?

Here's a summary of the context; please tell me if it's obvious who would be most appropriate to assess this inquiry.

Sincerely, Chris Macrae, +1 240 316 8157, Norman Macrae Foundation, Bard.solar

Back in 1951, editor Geoffrey Crowther seconded my father, Norman Macrae, to Princeton and New York for a year; dad became the biographer/journalist of futures inspired by von Neumann's networks as well as those of Keynes, Crowther, Wilson, Smith and other friends of The Economist's vision.

In addition to dad's 40 years of work at The Economist, from the early 1980s I helped co-author future histories (the genre 2025 Report; Matt Ridley's review) and research the von Neumann biography.

A while ago I was chatting with von Neumann's daughter Marina, and we started asking/mapping: if von Neumann, or dad and past friends, were alive today, whose intelligence/vision for human advancement would they value most, 1950-2025?

It's my hypothesis, and that of maths friends in NY who want to help the UN Sustainability Goals advance, that only Architects of Intel can return systems to sustainability of generations. So our proposal is: gamify Architects of Intelligence by curating first tours of inspirational human intelligences, as well as inviting people to substitute players for their own fantasy-league segmentations. By this means, we can counter noise/politics spun by fear of AI.

Ultimately, extinction would be the world's biggest maths error, but that's no reason to passively let it happen. We are talking about mapping human intelligence designs/actions in deep communities as well as through tech platforms.


We have tested chapters of three types:
Alpha is chronological, so eg Crowther, Neumann, Einstein, Turing, Kennedy: what would the world uniquely miss without linking these visionary agents of intergenerational good?

Route Gamma comes back from 2025's leading AI-for-good figures, eg Hassabis, Fei-Fei Li, Pichai, Lila Ibrahim.

Route Beta hops between 2025 and 1951 to extraordinary meetings of minds. When dad died, the Japan Ambassador to Bangladesh arranged for me to follow up what happened when Steve Jobs, in 2001 Silicon Valley, hosted the 65th birthday party of Fazle Abed, who had converted from being a Royal Dutch Shell regional CEO to 50 years of empowering the bottom-of-the-pyramid 1 billion women. (Fully cataloguing this most unusual innovation at abedmooc.com took 16 trips to Bangladesh during the 5 years leading up to the SDGs' launch in 2015 and the 5 that followed.)

We believe gamifying Architect Intelligence can help multilateral leaders like Guterres end greenwashing. Between my father and me, we traveled to Asia 100 times, so representing each hemisphere's priority future intels is designed into version 0 of this game. So, for example, regarding Asia's 60+% of beings, I hope we may yet make progress on where James Wilson left off back in 1860.

Sunday, July 30, 2023

2017 golden oldie: Fei-Fei Li and the countdown to the 2030 SDGs world, 13 years and counting.

Au revoir, ImageNet and China AI; hello, Google cloudsters. See also friends20.com and aifordsgs.com. Update Aug 26 2023: apply to be a UN world partner in Guterres' latest intelligence advisory.

Fei-Fei Li: Artificial Intelligence is on its way to reshape the world
Fei-Fei Li, a well-known scientist focusing on computer vision and Artificial Intelligence (AI), did not expect such zeal in China about AI. During her last visit to Beijing, the Stanford University professor drew much attention from both academia and industry; NSR took the opportunity to interview Professor Li. She points out that, although neural networks have made marvelous advances in the past 15-20 years, there are still enormous challenges ahead. On the one hand, computational models for AI, such as many current deep neural networks, have theoretical bottlenecks to resolve, such as interpretability and explainability; on the other hand, AI should offer more in solving societal problems and in accelerating innovation in industries such as healthcare, traffic control and agriculture. This would be a more practical way to realize the potential and speed up the advancement of AI. Moreover, Prof. Li is interested in but cautious about Artificial General Intelligence (AGI).
Type: Interview. Author's e-mail: gaoyuan@scichina.org. By Yi Zeng and Ling Wang.

FROM STANFORD TO GOOGLE

NSR: We learned that you joined Google earlier this year; could you provide us with more details?

Li: Actually, this does not mean I am leaving the academic community. I will be on sabbatical, working as Chief Scientist of AI/ML of Google Cloud, until the second half of 2018. And during this time, I will continue to work with my graduate students, postdoc fellows and collaborators at Stanford University.

NSR: Why did you choose Google Cloud? Will there be overlap in research topics between your lab and the company?

Li: If we look back at the fast development of AI during the past 20 years, especially the three subfields of AI - Machine Learning (ML), Natural Language Processing (NLP) and Computer Vision (CV) - we would see that Web-based data is a very important driving force to make AI stronger and stronger. So, what is the next step for AI? In my point of view, it is time for AI to help other vertical industries like healthcare, agriculture and manufacturing to transform and upgrade. Google Cloud is an excellent platform that will accelerate this process, which has both scientific and commercial significance.

NSR: What will you do at Google Cloud?

Li: I will assemble a team with versatile talents to improve the AI and ML performance of Google Cloud and collaborate with the commercial department to facilitate new products. We hope
to have more interactions with our counterparts in the academic community and welcome them to work at Google Cloud.

NSR: It is obvious that the rise of computing capability has enabled the recent advancement of some AI models, including deep neural networks. What else do you think could be brought to AI by advancing computing infrastructure?

Li: Computing capability could affect not only the speed of computation but also the structure of AI models. For example, graphical models in ML had been very popular in the 1990s. Due to the limits of computing power, experts hand-designed many features to reduce complexity and time cost. When more powerful computing infrastructure arose, we realized that hand-design methods had missed opportunities. With powerful computation capability, more complex and efficient algorithms could be inspired and applied.

NSR: Another related question: is it true that the larger the computation scale, the better a model would perform? For example, to realize the cognitive functions of the human brain and interpret the nature of human intelligence, should we set up a model consisting of the same order of neurons as our brain?

Li: That is a tough question, but scale really does matter. The Chinese saying "Quantity Breeds Quality" is appropriate for describing machine learning models, I think.

NSR: Some say that many AI scientists are doing the same things as statisticians: while AI scientists build models with 1,000,000 parameters, statisticians build models with 100 parameters to solve the problems. What do you think?

Li: I am afraid I can't agree with this idea. In my point of view, statistical algorithms and AI are not standing on opposite sides; instead, they are complementary, and perhaps AI is a continuation of statistics. If we could solve the problem with a model with 100 rather than 1,000,000 parameters, that would be great.
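Li's contrast between a 100-parameter statistical model and a 1,000,000-parameter network is easy to make concrete: both are parameter-counting exercises, differing in scale rather than kind. A hypothetical sketch (the layer widths below are illustrative choices, not figures from the interview), where each dense layer contributes inputs times outputs weights plus one bias per output:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Parameters of one fully connected layer: a weight per input-output pair, plus biases."""
    return n_in * n_out + n_out

# The "statistician's" model: 99 features -> 1 output = 100 parameters.
linear = dense_params(99, 1)

# A modest multilayer perceptron, 784 -> 1000 -> 1000 -> 10 (illustrative widths).
mlp = dense_params(784, 1000) + dense_params(1000, 1000) + dense_params(1000, 10)

print(linear)  # 100
print(mlp)     # 1796010
```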
To be clear, the 1,000,000 parameters of an AI model have their meaning; we should support not only statistical research but also the interpretation of AI models. DARPA (Defense Advanced Research Projects Agency) initiated a project called Explainable AI, aiming to fathom the black box of AI models.

POTENTIAL PATHS TO INTERPRET AI

NSR: Many neural network models lack interpretability, especially the hidden layers in deep neural networks. Do you think brain research would inspire and improve interpretations of these networks?

Li: Although the neural networks of AI are quite different from those of the human brain, they did indeed borrow the concept from neuroscience. We are far from understanding our brain, and we are in the same awkward situation with neural networks in AI. It is possible that breakthroughs in neuroscience would stimulate AI interpretation, and cognitive science is another propeller to accelerate the process.

NSR: In fact, close cooperation between AI scientists and neural scientists is an emerging phenomenon in China; the CAS Center for Excellence in Brain Science and Intelligence Technology (CEBSIT) is
the best example. The Center selects and recruits leading scientists to tackle problems in the interdisciplinary area of brain science and brain-inspired AI.

Li: That is great! This is probably not that obvious in the US. I hope there will be more cross-disciplinary cooperation all over the world.

NSR: It has been more than 60 years since AI emerged as a field of research, and there have been ups and downs in its development. Neural networks have become very popular these days; what about other branches of AI, such as symbolism and knowledge representation and reasoning, which were major subfields of AI? Should we integrate the efforts of knowledge engineering and neural networks?

Li: Neural networks have made huge successes in solving problems that otherwise seemed impossible. AlphaGo's victory over the human master in the game of Go is the latest convincing proof. Symbolism had its peak in the last century, but seems less popular these days. To be honest, knowledge representation is a perplexing problem for me. Why do we humans form language, which is closely related to symbols? Is it a second choice, or does it have intrinsic advantages? There are indeed some preliminary works hybridising symbolic methods (conditional random fields) with neural networks. But I think the most practical way for AI to leap forward is to break the bottleneck of neural networks' lack of interpretability, architectural knowledge and training flexibility.

IS AGI AN ILLUSION?

NSR: It is weird that Artificial General Intelligence (AGI) has been chased after in the industry community but currently seems not well accepted in the academic community. Some say it is the ultimate goal of AI. What do you think?

Li: I suspect that the propaganda of AGI is motivated by commercial interest, by people who have no idea what it really means. For example, what does it mean that a system will be capable of doing mathematics? Prove Fermat's theorem, or what else? Does an unmanned vehicle have AGI? It has multiple sensors and functions to assume the driving load as humans do, if not better. Or does a robot sent to Mars to build houses have AGI? I don't think there is a clear definition of AGI.

NSR: From my point of view (Yi Zeng), AGI can be interpreted from the perspective of autonomously deciding the types of problems, and coordinating all the cognitive functions that human beings have, to solve very different complex tasks.

Li: I don't think that is AGI; if an AGI agent is approximately equal to a human, it has nothing to do with AI. Humans are not just universally capable beings; we have love, emotion, empathy, and these are qualities that do not seem to be included in AGI. If we need to define AGI, maybe it is an agent capable of multi-knowledge representation, multi-sensory input, multi-layer reasoning, and learning.

NSR: As human beings, many of us regard ourselves as at the top of biological evolution, and many may feel scared of being surpassed by machines in the areas we are good at.

Li: Cars run faster than us, cranes hold heavier objects; it is not necessary to be afraid. We have
emotion, which is unique.

NSR: There is a line of research on robot and machine consciousness. What do you think about creating a system that has consciousness and emotion like us?

Li: It is more of a philosophical problem. We are carbon-based structures, but at what level do we finally get consciousness and emotion? It is difficult to answer.

NSR: A recently published paper from the Institute of Neuroscience, Chinese Academy of Sciences indicates that trained monkeys showed self-consciousness by recognizing their image in a mirror, which challenges the traditional belief that monkeys don't have self-consciousness.

Li: Did they run neuron-correlation experiments? MRI is probably too coarse to prove this.

NSR: Yes, they are doing further experiments to support the findings.

GREAT PASSION PAYS OFF

NSR: ImageNet is a flagship project in computer vision research; could you describe the motivation and why you started such a project?

Li: When I began to build ImageNet, there was little funding for me to do it. But I did not give up. I did enjoy the research process, and that was enough. Fortunately, six years later, ImageNet proved its value through a milestone paper which heralded a new spring for AI, especially for neural networks. Compared with Geoffrey Hinton, I feel so lucky, for he had stuck to the research field for more than 20 years before it paid off, and I had not waited that long. And I admire his passion very much.

NSR: Passion is very important for scientific research; what about choosing the direction?

Li: Based on statistics, there is no way to guarantee success, and that is especially true in research. Failure, if not unavoidable, is very likely. So my experience is to choose what I have great passion for and enjoy the process. If you feel exhausted and regretful, it would not deserve your time.

NSR: Besides passion, you also need courage.

Li: Indeed. If you want to pursue science, you must pursue freedom and truth. Being free to choose your direction seems positive, but it is scary, for there is no codified experience to draw upon. You need to rely on yourself and bear the risk of failure.

NSR: Ground-breaking work in fundamental science has been called for in China for a long time, but the outcomes are not that satisfying. What is your suggestion?

Li: There are so many smart brains in China, and research funding is increasing steadily. I think my counterparts here have better chances to do original work now. Perhaps tolerance for failure should be a concern for the whole society, which means scientists could have the freedom to do what they have passion for. It is known that tenure track was invented to protect the curiosity of scientists and support exploration in unreached areas. And I heard that leading universities in China have gradually adopted the system; it is a good sign.
As we mentioned earlier, "Quantity Breeds Quality"; incremental research is also important. The Deep Residual Networks (ResNets) proposed by Kaiming He, currently the prevalent architecture in computer vision, have shown great potential in areas like NLP and speech recognition. I hope there will be more and more Kaiming Hes in China.

INTO THE FUTURE OF AI

NSR: AI has been hyped these days, and once a startup brags that it is an AI company, it attracts investment more easily, thus bringing in more talent. How would this tide affect fundamental research in AI?

Li: It is an interesting time for AI, to say the least. AI is a real deal, but it is also heavily hyped by communication or presentation without care and rigor. I firmly believe AI is as real as computing, the Internet, renewable energy, new materials, etc. But indeed the amount of frivolous talk of AI is also intense right now. It impacts everyone, from entrepreneurs to investors, from big companies to governments, from funding agencies to basic research institutions. Many AI researchers have repeatedly called for a balanced discussion of AI at both academic and public forums. We will continue to do this. The truth is, at this moment, this is the time to double down on basic, fundamental research in AI. It's time to support and fund long-term research that addresses many of the most difficult and yet-to-be-solved problems of AI.

NSR: What do you think are the most important challenges in AI in future years?

Li: There is a long list of challenges. For machine learning, the most critical problem is learning to learn: the machine learning models applied to solve problems are hand-designed architectures, and unless we interpret their working mechanisms, it is hard to reach the target. For NLP, we still stay at a relatively superficial level, lacking deep interactive dialogue. For computer vision, although products based on it are emerging, we are far from fulfilling our aim. If computers could observe the world as we do, and make correct judgments, that would substantially change our lives.

Yi Zeng is a professor at the Institute of Automation, Chinese Academy of Sciences, and Ling Wang is a science news reporter based in Beijing.
Figure 1: Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab. (Courtesy of Fei-Fei Li)