Graphics card

From Wikipedia, the free encyclopedia

A graphics card (also called a video card, display card, graphics adapter, or display adapter) is an expansion card which generates a feed of output images to a display device (such as a computer monitor). They are often advertised as discrete or dedicated graphics cards, emphasizing the distinction between them and integrated graphics. At the core of both is the graphics processing unit (GPU), the main component that performs the computations, but it should not be confused with the graphics card as a whole, although “GPU” is often used as a metonymic shorthand to refer to graphics cards.

Most graphics cards are not limited to simple display output. The graphics processing unit can do additional processing, removing this task from the CPU of the computer.[1] For example, Nvidia and AMD (previously ATI) produce cards that render the OpenGL and DirectX graphics pipelines at the hardware level.[2] In the later 2010s, there has also been a tendency to use the computing capabilities of the graphics processor to solve non-graphics tasks, which can be done through the use of OpenCL and CUDA. Graphics cards are used extensively for AI training, cryptocurrency mining, and molecular simulation.[2][3][4]
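The general-purpose GPU computing mentioned above can be illustrated with a short sketch. This is a minimal example, assuming the CuPy library (a NumPy-compatible CUDA array library, not named in the article) and a CUDA-capable GPU; it is in the spirit of the OpenCL/CUDA workloads described, not any specific vendor API.

```python
# Minimal GPGPU sketch, assuming CuPy is installed on a CUDA system.
import cupy as cp

# Allocate a large array directly in GPU memory.
x = cp.arange(10_000_000, dtype=cp.float32)

# The elementwise math below runs on the GPU, not the CPU.
y = cp.sqrt(x) * 2.0 + 1.0

# The reduction also runs on the GPU; only one scalar is copied back.
print(float(y.sum()))
```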

Graphics cards are usually made in the form of a printed circuit board (expansion board) to be inserted into an expansion slot, either universal or specialized (AGP, PCI Express).[5] Some have been made using dedicated enclosures, which are connected to the computer via a docking station or a cable. These are known as eGPUs.

History[edit]

Standards such as MDA, CGA, HGC, Tandy, PGC, EGA, VGA, MCGA, 8514 or XGA were introduced from 1982 to 1990 and supported by a variety of hardware manufacturers.

3dfx Interactive was one of the first companies to develop a GPU with 3D acceleration (with the Voodoo series) and a graphics chipset dedicated to 3D but without 2D support (which therefore required a separate 2D card to work). Most modern graphics cards are now built with graphics chips from AMD or Nvidia.[6] Until 2000, 3dfx Interactive was also an important and often pioneering manufacturer. Most graphics cards offer various functions such as accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, or the ability to connect multiple monitors (multi-monitor). Graphics cards also have sound card capabilities to output sound together with the video to TVs or monitors with integrated speakers.

Within the industry, graphics cards are sometimes called graphics add-in boards, abbreviated as AIBs,[6] with the word “graphics” usually omitted.

Dedicated vs integrated graphics[edit]

[Figure: classical desktop computer architecture with a discrete graphics card over PCI Express. Typical bandwidths for given memory technologies are shown; memory latency is not. Because the GPU and CPU have separate physical memories, zero-copy sharing is impossible: data must be copied from one to the other before it can be shared.]
[Figure: integrated graphics with partitioned main memory. A portion of system memory is reserved for the GPU; zero-copy is still impossible, and data must be copied from one partition to the other across the system memory bus.]
[Figure: integrated graphics with unified main memory, as found on AMD “Kaveri” or the PlayStation 4 (HSA).]
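The copy requirement in the captions above can be made concrete with a small sketch. Assuming NumPy and CuPy (neither named in the article) on a system with a discrete card, data must cross the PCI Express bus explicitly in both directions:

```python
# Explicit CPU <-> GPU copies; zero-copy sharing is not possible
# between distinct physical memories, as described above.
import numpy as np
import cupy as cp

host = np.random.rand(1_000_000).astype(np.float32)  # system RAM

device = cp.asarray(host)      # host -> device copy over PCIe
device *= 2.0                  # computed in the card's own memory
result = cp.asnumpy(device)    # device -> host copy back over PCIe
```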

As an alternative to using a graphics card, video hardware can be integrated into the motherboard, the CPU, or a system on a chip; both approaches can be called integrated graphics. Motherboard-based implementations are sometimes called “on-board video”. Almost all desktop motherboards with integrated graphics allow the integrated graphics chip to be disabled in the BIOS and provide a PCI or PCI Express (PCI-E) slot for adding a higher-performance graphics card in place of the integrated graphics. The ability to disable the integrated graphics also sometimes allows the continued use of a motherboard on which the on-board video has failed. Sometimes a discrete (also called dedicated) graphics card can be used alongside the integrated graphics to drive separate displays simultaneously. The main advantages of integrated graphics are cost, compactness, simplicity, and low power consumption; its performance disadvantage arises because the GPU shares system resources with the CPU. A discrete graphics card has its own random access memory (RAM), its own cooling system, and dedicated power regulators, all designed specifically for processing video. Upgrading to a discrete graphics card offloads the CPU and system RAM, so not only is graphics processing faster, but the computer's overall performance can also improve noticeably. This is often needed for playing video games, working with 3D animation, or editing video.

Both AMD and Intel have introduced CPUs and motherboard chipsets which support the integration of a GPU onto the same die as the CPU. AMD markets CPUs with integrated graphics under the trademark Accelerated Processing Unit (APU), while Intel markets similar technology under the Intel HD Graphics and Iris brands. With the 8th-generation processors, Intel announced the Intel UHD series of integrated graphics for better support of 4K displays.[7] Although they are still not equivalent to the performance of discrete solutions, Intel's HD Graphics platform provides performance approaching that of discrete mid-range graphics, and AMD's APU technology has been adopted by both the PlayStation 4 and Xbox One video game consoles.[8][9][10]

Power demand[edit]

As the processing power of graphics cards has increased, so has their demand for electrical power. Current high-performance graphics cards tend to consume large amounts of power. For example, the thermal design power (TDP) for the GeForce Titan RTX is 280 watts,[11] and when tested in games, the GeForce RTX 2080 Ti Founder's Edition averaged 300 watts of power consumption.[12] While CPU and power supply makers have recently moved toward higher efficiency, the power demands of GPUs have continued to rise, so the graphics card may be the most power-hungry component in a computer.[13][14] Although power supplies have also been increasing their capacity, the bottleneck is the PCI Express connection, which is limited to supplying 75 watts.[15] Modern graphics cards that draw more than 75 watts usually include a combination of six-pin (75 W) or eight-pin (150 W) plugs that connect directly to the power supply. Providing adequate cooling becomes a challenge in such computers. Computers with multiple graphics cards may need power supplies of more than 750 watts. Heat extraction becomes a major design consideration for computers with two or more high-end graphics cards.
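The connector limits quoted above make a card's power budget easy to check by hand; the following back-of-the-envelope arithmetic (illustrative only, not from the article's sources) shows why a 300 W card needs auxiliary plugs:

```python
# Power available to a card, from the figures cited above.
PCIE_SLOT_W = 75    # PCI Express slot limit
SIX_PIN_W = 75      # six-pin auxiliary plug
EIGHT_PIN_W = 150   # eight-pin auxiliary plug

# A card with one eight-pin and one six-pin connector:
print(PCIE_SLOT_W + EIGHT_PIN_W + SIX_PIN_W)  # 300 W total budget
```

Size[edit]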

Graphics cards for desktop computers come in one of two size profiles, which can allow a graphics card to be added even to small computers. Some graphics cards are not of the usual size and are thus categorized as low profile.[16][17] Graphics card profiles are based on height only, with low-profile cards taking up less than the height of a PCIe slot; some can be as low as “half-height”.[citation needed] Length and thickness can vary greatly, with high-end cards usually occupying two or three expansion slots, and dual-GPU cards, such as the Nvidia GeForce GTX 690, generally exceeding 250 mm (10 in) in length.[18] Generally, most users will prefer a lower-profile card if the intention is to fit multiple cards, or if they run into clearance issues with other motherboard components such as the DIMM or PCIe slots. This can be fixed with a larger case, which comes in sizes such as mid-tower and full tower. Full towers can usually fit larger motherboards in sizes such as ATX and micro ATX. The larger the case, the more room there is for the motherboard, the graphics card, and other components.

Multi-card scaling[edit]

Some graphics cards can be linked together to allow scaling of graphics processing across multiple cards. This is done using either the PCIe bus on the motherboard or, more commonly, a data bridge. Generally, the cards must be of the same model to be linked, and most low-power cards are not able to be linked in this way.[19] AMD and Nvidia both have proprietary methods of scaling: CrossFireX for AMD, and SLI (since the Turing generation, superseded by NVLink) for Nvidia. Cards from different chipset manufacturers or architectures cannot be used together for multi-card scaling. If the linked graphics cards have different amounts of memory, the lowest value will be used, with the higher values disregarded. Currently, scaling on consumer-grade cards can be done using up to four cards.[20][21][22] The use of four cards requires a large motherboard with a proper configuration. Nvidia's GeForce GTX 590 graphics card can be configured in this four-card configuration.[23] As stated above, users will want to stick to cards of the same performance for optimal use. Motherboards like the ASUS Maximus 3 Extreme and Gigabyte GA EX58 Extreme are certified to work with this configuration.[24] A certified, sufficiently large power supply is necessary to run the cards in SLI or CrossFireX; power demands must be known before a proper supply is installed. For a four-card configuration, a 1000+ watt supply is needed; the AcBel PC8055-000G and Corsair AX1200 supplies are examples.[24] With any relatively powerful graphics card, thermal management cannot be overlooked. Graphics cards require a well-vented chassis and a good thermal solution. Air or water cooling is usually required; while low-power GPUs can use passive cooling, larger configurations use water cooling or immersion cooling to achieve proper performance without thermal throttling.[25]
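The data-parallel idea behind multi-card scaling can be sketched in a few lines. This assumes CuPy and at least two CUDA devices (an assumption, not the article's setup), and it mirrors the splitting of work across cards rather than the SLI/CrossFireX driver mechanisms themselves:

```python
# Split one workload across two GPUs and combine the partial results.
import numpy as np
import cupy as cp

data = np.random.rand(2_000_000).astype(np.float32)
halves = np.array_split(data, 2)

partials = []
for device_id, chunk in enumerate(halves):
    with cp.cuda.Device(device_id):   # select GPU 0, then GPU 1
        partials.append(float(cp.square(cp.asarray(chunk)).sum()))

print(sum(partials))  # combined result from both cards
```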

SLI and CrossFire have become increasingly uncommon, as most games do not fully utilize multiple GPUs and most users cannot afford them.[26][27][28] Multiple GPUs are still used on supercomputers (as in Summit), on workstations to accelerate video[29][30][31] and 3D rendering,[32][33][34][35][36] for VFX[37][38] and simulations,[39] and in AI to expedite training, as is the case with Nvidia's lineup of DGX workstations and servers.

3D graphics APIs[edit]

A graphics driver usually supports one or multiple cards by the same vendor and has to be specifically written for an operating system. Additionally, the operating system or an extra software package may provide certain programming APIs for applications to perform 3D rendering.

OS Vulkan DirectX GNMX Metal OpenGL OpenGL ES
Windows Yes Microsoft No No Yes Yes
macOS MoltenVK No No Apple Apple No
Linux Yes Wine No No Yes Yes
Android Yes No No No Nvidia Yes
iOS MoltenVK No No Apple No Apple
Tizen In development No No No No Yes
Sailfish OS In development No No No No Yes
Xbox No Yes No No No No
Orbis OS (PlayStation) No No Yes No No No
Wii U Yes No No No Yes Yes

Usage specific GPU[edit]

Some GPUs are designed with specific usage in mind:

  1. Gaming
    • GeForce GTX
    • GeForce RTX
    • Nvidia Titan
    • Radeon HD
    • Radeon RX
  2. Cloud gaming
    • Nvidia Grid
    • Radeon Sky
  3. Workstation
    • Nvidia Quadro
    • AMD FirePro
    • Radeon Pro
  4. Cloud Workstation
    • Nvidia Tesla
    • AMD FireStream
  5. Artificial Intelligence Cloud
    • Nvidia Tesla
    • Radeon Instinct
  6. Automated/Driverless car
    • Nvidia Drive PX

Industry[edit]

As of 2016, the primary suppliers of the GPUs (graphics chips or chipsets) used in graphics cards are AMD and Nvidia. In the third quarter of 2013, AMD had a 35.5% market share while Nvidia had a 64.5% market share,[40] according to Jon Peddie Research. In economics, this industry structure is termed a duopoly. AMD and Nvidia also build and sell graphics cards, which are termed graphics add-in boards (AIBs) in the industry. (See Comparison of Nvidia graphics processing units and Comparison of AMD graphics processing units.) In addition to marketing their own graphics cards, AMD and Nvidia sell their GPUs to authorized AIB suppliers, which AMD and Nvidia refer to as “partners”.[6] The fact that Nvidia and AMD compete directly with their customers/partners complicates relationships in the industry. The fact that AMD and Intel are direct competitors in the CPU industry is also noteworthy, since AMD-based graphics cards may be used in computers with Intel CPUs. Intel's move to APUs may weaken AMD, which until now has derived a significant portion of its revenue from graphics components. As of the second quarter of 2013, there were 52 AIB suppliers.[6] These AIB suppliers may market graphics cards under their own brands, produce graphics cards for private-label brands, or produce graphics cards for computer manufacturers. Some AIB suppliers, such as MSI, build both AMD-based and Nvidia-based graphics cards. Others, such as EVGA, build only Nvidia-based graphics cards, while XFX now builds only AMD-based graphics cards. Several AIB suppliers are also motherboard suppliers. The largest AIB suppliers, based on global retail market share for graphics cards, include Taiwan-based Palit Microsystems, Hong Kong-based PC Partner (which markets AMD-based graphics cards under its Sapphire brand and Nvidia-based graphics cards under its Zotac brand), Taiwan-based computer maker Asus, Taiwan-based MSI, Taiwan-based Gigabyte Technology,[41] Brea, California, USA-based EVGA (which also sells computer components such as power supplies) and Ontario, California, USA-based XFX. (The parent corporation of XFX is based in Hong Kong.)

Market[edit]

Graphics card shipments peaked at a total of 114 million in 1999. By contrast, they totaled 14.5 million units in the third quarter of 2013, a 17% fall from Q3 2012 levels,[40] and 44 million total in 2015. The sales of graphics cards have trended downward due to improvements in integrated graphics technologies; high-end, CPU-integrated graphics can provide performance competitive with low-end graphics cards. At the same time, graphics card sales have grown within the high-end segment, as manufacturers have shifted their focus to prioritize the gaming and enthusiast market.[41][42]

Beyond the gaming and multimedia segments, graphics cards have been increasingly used for general-purpose computing, such as big data processing.[43] The growth of cryptocurrency has placed a severely high demand on high-end graphics cards, especially in large quantities, due to their advantages in the process of mining. In January 2018, mid-to-high-end graphics cards experienced a major surge in price, with many retailers having stock shortages due to the significant demand among this market.[44][42][45] Graphics card companies released mining-specific cards designed to run 24 hours a day, seven days a week, and without video output ports.[4] The graphics card industry took a setback due to the 2020–21 chip shortage.[46][47]

Parts[edit]

[Figure: a Radeon HD 7970 with the main heatsink removed, showing the major components of the card. The large, tilted silver object is the GPU die, which is surrounded by RAM chips, which are covered in extruded aluminum heatsinks. Power delivery circuitry is mounted next to the RAM, near the right side of the card.]

A modern graphics card consists of a printed circuit board on which the components are mounted. These include:

Graphics Processing Unit[edit]

Main article: graphics processing unit

A graphics processing unit (GPU), also occasionally called a visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the building of images in a frame buffer intended for output to a display. Because of the large degree of programmable computational complexity for such a task, a modern graphics card is also a computer unto itself.

[Figure: a half-height graphics card.]

Heat sink[edit]

A heat sink is mounted on most modern graphics cards. A heat sink spreads out the heat produced by the graphics processing unit evenly throughout the heat sink and unit itself. The heat sink commonly has a fan mounted on it as well to cool the heat sink and the graphics processing unit. Not all cards have heat sinks; for example, some cards are liquid-cooled and instead have a water block, and cards from the 1980s and early 1990s did not produce much heat and did not require heatsinks. Most modern graphics cards need a proper thermal solution, either liquid cooling or a heatsink with additional connected heat pipes, usually made of copper for the best thermal transfer. The case, whether mid-tower, full-tower, or some other form factor, has to be properly configured for thermal management: ample space with a proper push-pull (or opposed) fan configuration, or liquid cooling with a radiator, either in place of or in addition to a fan setup.


Video BIOS[edit]

The video BIOS or firmware contains a minimal program for the initial set up and control of the graphics card. It may contain information on the memory timing, operating speeds and voltages of the graphics processor, RAM, and other details which can sometimes be changed.

The modern Video BIOS does not support all the functions of the graphics card, being only sufficient to identify and initialize the card to display one of a few frame buffer or text display modes. It does not support YUV to RGB translation, video scaling, pixel copying, compositing or any of the multitude of other 2D and 3D features of the graphics card, which must be accessed by other software.

Video memory[edit]

Type Memory clock rate (MHz) Bandwidth (GB/s)
DDR 200–400 1.6–3.2
DDR2 400–1066.67 3.2–8.533
DDR3 800–2133.33 6.4–17.066
DDR4 1600–4866 12.8–25.6
GDDR4 3000–4000 160–256
GDDR5 1000–2000 288–336.5
GDDR5X 1000–1750 160–673
GDDR6 1365–1770 336–672
HBM 250–1000 512–1024

The memory capacity of most modern graphics cards ranges from 2 GB to 24 GB,[48] with some offering up to 32 GB as of the late 2010s, as the applications of graphics processing grow more powerful and widespread. Since video memory needs to be accessed by the GPU and the display circuitry, it often uses special high-speed or multi-port memory, such as VRAM, WRAM, SGRAM, etc. Around 2003, video memory was typically based on DDR technology. During and after that year, manufacturers moved towards DDR2, GDDR3, GDDR4, GDDR5, GDDR5X, and GDDR6. The effective memory clock rate in modern cards is generally between 2 GHz and 15 GHz.
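The bandwidth column in the table above follows directly from the effective data rate and the bus width. A small helper (the example card values are hypothetical, not taken from a specific product) makes the relationship explicit:

```python
# bandwidth (GB/s) = effective data rate (GT/s) * bus width (bits) / 8
def mem_bandwidth_gbs(effective_gtps: float, bus_width_bits: int) -> float:
    return effective_gtps * bus_width_bits / 8

# A hypothetical GDDR6 configuration: 14 GT/s on a 256-bit bus.
print(mem_bandwidth_gbs(14.0, 256))  # 448.0 GB/s
```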

Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which manages the depth coordinates in 3D graphics, textures, vertex buffers, and compiled shader programs.

RAMDAC[edit]

The RAMDAC, or random-access-memory digital-to-analog converter, converts digital signals to analog signals for use by a computer display that uses analog inputs such as cathode ray tube (CRT) displays. The RAMDAC is a kind of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and the RAMDAC data-transfer rate, the converter will be able to support different computer-display refresh rates. With CRT displays, it is best to work over 75 Hz and never under 60 Hz, to minimize flicker.[49] (With LCD displays, flicker is not a problem.[citation needed]) Due to the growing popularity of digital computer displays and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component. All current LCD/plasma monitors and TVs and projectors with only digital connections work in the digital domain and do not require a RAMDAC for those connections. There are displays that feature analog inputs (VGA, component, SCART, etc.) only. These require a RAMDAC, but they reconvert the analog signal back to digital before they can display it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.[citation needed] With the VGA standard being phased out in favor of digital interfaces, RAMDACs are beginning to disappear from graphics cards.[citation needed]

[Figure: a Radeon HD 5850 with a DisplayPort, HDMI, and two DVI ports.]
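The refresh-rate dependence described above comes down to the pixel clock the RAMDAC must sustain. The sketch below estimates it; the 1.35 blanking-overhead factor is a rough assumption for CRT timings, not a value from the article:

```python
# pixel clock ~= width * height * refresh rate * blanking overhead
def pixel_clock_mhz(width, height, refresh_hz, blanking=1.35):
    return width * height * refresh_hz * blanking / 1e6

# A CRT at 1024x768 and 85 Hz, above the 75 Hz guideline cited above:
print(round(pixel_clock_mhz(1024, 768, 85)))  # roughly 90 MHz
```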

Output interfaces[edit]

[Figure: Video In Video Out (VIVO) for S-Video (TV-out), Digital Visual Interface (DVI) for high-definition television (HDTV), and DE-15 for Video Graphics Array (VGA).]

The most common connection systems between the graphics card and the computer display are:

Video Graphics Array (VGA) (DE-15)[edit]

[Figure: Video Graphics Array (VGA) DE-15 connector.]
Main article: Video Graphics Array

Also known as D-sub, VGA is an analog-based standard adopted in the late 1980s and designed for CRT displays; the connector is also called a VGA connector. Some problems with this standard are electrical noise, image distortion, and sampling error in evaluating pixels.

Today, the VGA analog interface is used for high definition video including 1080p and higher. While the VGA transmission bandwidth is high enough to support even higher resolution playback, the picture quality can degrade depending on cable quality and length. How discernible this quality difference is depends on the individual’s eyesight and the display; when using a DVI or HDMI connection, especially on larger sized LCD/LED monitors or TVs, quality degradation, if present, is prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface, if Image Constraint Token (ICT) is not enabled on the Blu-ray disc.

Digital Visual Interface (DVI)[edit]

[Figure: Digital Visual Interface (DVI-I) connector.]
Main article: Digital Visual Interface

DVI is a digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors. In some rare cases, high-end CRT monitors also use DVI. It avoids image distortion and electrical noise by mapping each pixel from the computer to a display pixel, using the display's native resolution. It is worth noting that most manufacturers include a DVI-I connector, allowing (via a simple adapter) standard RGB signal output to an old CRT or LCD monitor with VGA input.

Video In Video Out (VIVO) for S-Video, Composite video and Component video[edit]

Main articles: S-Video, Composite video, and Component video

Included to allow connection with televisions, DVD players, video recorders and video game consoles. They often come in two 10-pin mini-DIN connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-Video in and out + composite video in and out), or 6 connectors (S-Video in and out + component PB out + component PR out + component Y out [also composite out] + composite in).

High-Definition Multimedia Interface (HDMI)[edit]

[Figure: High-Definition Multimedia Interface (HDMI) connector.]
Main article: HDMI

HDMI is a compact audio/video interface for transferring uncompressed video data and compressed/uncompressed digital audio data from an HDMI-compliant device (“the source device”) to a compatible digital audio device, computer monitor, video projector, or digital television.[50] HDMI is a digital replacement for existing analog video standards. HDMI supports copy protection through HDCP.

DisplayPort[edit]

[Figure: DisplayPort connector.]
Main article: DisplayPort

DisplayPort is a digital display interface developed by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, though it can also be used to transmit audio, USB, and other forms of data.[51] The VESA specification is royalty-free. VESA designed it to replace VGA, DVI, and LVDS. Backward compatibility to VGA and DVI by using adapter dongles enables consumers to use DisplayPort-fitted video sources without replacing existing display devices. Although DisplayPort offers greater throughput and much of the same functionality as HDMI, it is expected to complement the interface, not replace it.[52][53]

USB-C[edit]

Main article: USB-C

Other types of connection systems[edit]


Motherboard interfaces[edit]

Main articles: Bus (computing) and Expansion card

Chronologically, connection systems between graphics card and motherboard were, mainly:

  • S-100 bus: Designed in 1974 as a part of the Altair 8800, it is the first industry-standard bus for the microcomputer industry.
  • ISA: Introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It is an 8- or 16-bit bus clocked at 8 MHz.
  • NuBus: Used in Macintosh II, it is a 32-bit bus with an average bandwidth of 10 to 20 MB/s.
  • MCA: Introduced in 1987 by IBM it is a 32-bit bus clocked at 10 MHz.
  • EISA: Released in 1988 to compete with IBM’s MCA, it was compatible with the earlier ISA bus. It is a 32-bit bus clocked at 8.33 MHz.
  • VLB: An extension of ISA, it is a 32-bit bus clocked at 33 MHz. Also referred to as VESA.
  • PCI: Replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic connectivity between devices, avoiding the manual adjustments required with jumpers. It is a 32-bit bus clocked at 33 MHz.
  • UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. It is a 64-bit bus clocked at 67 or 83 MHz.
  • USB: Although mostly used for miscellaneous devices, such as secondary storage devices and toys, USB displays and display adapters exist.
  • AGP: First used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.
  • PCI-X: An extension of the PCI bus, it was introduced in 1998. It improves upon PCI by extending the width of bus to 64 bits and the clock frequency to up to 133 MHz.
  • PCI Express: Abbreviated as PCIe, it is a point-to-point interface released in 2004. By 2006 it provided double the data-transfer rate of AGP. It should not be confused with PCI-X, an enhanced version of the original PCI specification.

The following table is a comparison between a selection of the features of some of those interfaces.

See also: List of device bandwidths § Computer buses

[Figure: ATI Graphics Solution Rev 3 from 1985/1986, supporting Hercules graphics. As can be seen from the PCB, the layout was done in 1985, whereas the marking on the central chip CW16800-A says “8639”, meaning that chip was manufactured in week 39, 1986. This card uses the ISA 8-bit (XT) interface.]

Bus Width (bits) Clock rate (MHz) Bandwidth (MB/s) Style
ISA XT 8 4.77 8 Parallel
ISA AT 16 8.33 16 Parallel
MCA 32 10 20 Parallel
NUBUS 32 10 10–40 Parallel
EISA 32 8.33 32 Parallel
VESA 32 40 160 Parallel
PCI 32–64 33–100 132–800 Parallel
AGP 1x 32 66 264 Parallel
AGP 2x 32 66 528 Parallel
AGP 4x 32 66 1000 Parallel
AGP 8x 32 66 2000 Parallel
PCIe ×1 1 2500 / 5000 250 / 500 Serial
PCIe ×4 1 × 4 2500 / 5000 1000 / 2000 Serial
PCIe ×8 1 × 8 2500 / 5000 2000 / 4000 Serial
PCIe ×16 1 × 16 2500 / 5000 4000 / 8000 Serial
PCIe ×1 2.0[57] 1 500 / 1000 Serial
PCIe ×4 2.0 1 × 4 2000 / 4000 Serial
PCIe ×8 2.0 1 × 8 4000 / 8000 Serial
PCIe ×16 2.0 1 × 16 5000 / 10000 8000 / 16000 Serial
PCIe ×1 3.0 1 1000 / 2000 Serial
PCIe ×4 3.0 1 × 4 4000 / 8000 Serial
PCIe ×8 3.0 1 × 8 8000 / 16000 Serial
PCIe ×16 3.0 1 × 16 16000 / 32000 Serial
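The PCIe bandwidth figures in the table can be reproduced from the line rate and encoding overhead: PCIe 1.x and 2.0 use 8b/10b line encoding (80% efficiency), while 3.0 uses 128b/130b. A quick check:

```python
# Usable MB/s per PCIe lane = transfer rate * encoding efficiency / 8.
def pcie_lane_mbs(gt_per_s: float, efficiency: float = 0.8) -> float:
    return gt_per_s * 1000 * efficiency / 8

print(pcie_lane_mbs(2.5))          # 250.0 MB/s -> PCIe 1.x lane
print(pcie_lane_mbs(5.0))          # 500.0 MB/s -> PCIe 2.0 lane
print(pcie_lane_mbs(8.0, 128/130)) # ~985 MB/s  -> PCIe 3.0 lane
```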

See also[edit]

  • List of computer hardware
  • List of graphics card manufacturers
  • Computer display standards – a detailed list of standards like SVGA, WXGA, WUXGA, etc.
  • AMD (ATI), Nvidia – quasi duopoly of 3D chip GPU and graphics card designers
  • GeForce, Radeon – examples of popular graphics card series
  • GPGPU (i.e.: CUDA, AMD FireStream)
  • Framebuffer – the computer memory used to store a screen image
  • Capture card – the inverse of a graphics card

References[edit]

  1. ^ “ExplainingComputers.com: Hardware”. www.explainingcomputers.com. Archived from the original on 2017-12-17. Retrieved 2017-12-11.
  2. ^ a b “OpenGL vs DirectX – Cprogramming.com”. www.cprogramming.com. Archived from the original on 2017-12-12. Retrieved 2017-12-11.
  3. ^ “Powering Change with NVIDIA AI and Data Science”. NVIDIA. Archived from the original on 2020-11-10. Retrieved 2020-11-10.
  4. ^ a b Parrish, Kevin (2017-07-10). “Graphics cards dedicated to cryptocurrency mining are here, and we have the list”. Digital Trends. Archived from the original on 2020-08-01. Retrieved 2020-01-16.
  5. ^
  6. ^ a b c d “Add-in board-market down in Q2, AMD gains market share [Press Release]”. Jon Peddie Research. 16 August 2013. Archived from the original on 3 December 2013. Retrieved 30 November 2013.
  7. ^
  8. ^
  9. ^
  10. ^ Crijns, Koen (6 September 2013). “Intel Iris Pro 5200 graphics review: the end of mid-range GPUs?”. hardware.info. Archived from the original on 3 December 2013. Retrieved 30 November 2013.
  11. ^ “Introducing The GeForce GTX 780 Ti”. Archived from the original on 3 December 2013. Retrieved 30 November 2013.
  12. ^
  13. ^ “Faster, Quieter, Lower: Power Consumption and Noise Level of Contemporary Graphics Cards”. xbitlabs.com. Archived from the original on 2011-09-04.
  14. ^ “Video Card Power Consumption”. codinghorror.com. Archived from the original on 2008-09-08. Retrieved 2008-09-15.
  15. ^ Maxim Integrated Products. “Power-Supply Management Solution for PCI Express x16 Graphics 150W-ATX Add-In Cards”. Archived from the original on 2009-12-05. Retrieved 2007-02-17.
  16. ^ “What is a Low Profile Video Card?”. Outletapex. Archived from the original on 2020-07-24. Retrieved 2020-04-29.
  17. ^ “Best ‘low profile’ graphics card”. Tom’s Hardware. Archived from the original on 2013-02-19. Retrieved 2012-12-06.
  18. ^ “GTX 690 | Specifications”. GeForce. Archived from the original on 2013-03-08. Retrieved 2013-02-28.
  19. ^ “SLI”. geforce.com. Archived from the original on 2013-03-15. Retrieved 2013-03-13.
  20. ^ “SLI vs. CrossFireX: The DX11 generation”. techreport.com. Archived from the original on 2013-02-27. Retrieved 2013-03-13.
  21. ^ Adrian Kingsley-Hughes. “NVIDIA GeForce GTX 680 in quad-SLI configuration benchmarked”. ZDNet. Archived from the original on 2013-02-07. Retrieved 2013-03-13.
  22. ^ “Head to Head: Quad SLI vs. Quad CrossFireX”. Maximum PC. Archived from the original on 2012-08-10. Retrieved 2013-03-13.
  23. ^ “How to Build a Quad SLI Gaming Rig | GeForce”. www.geforce.com. Archived from the original on 2017-12-26. Retrieved 2017-12-11.
  24. ^ a b “How to Build a Quad SLI Gaming Rig | GeForce”. www.geforce.com. Archived from the original on 2017-12-26. Retrieved 2017-12-11.
  25. ^ “NVIDIA Quad-SLI|NVIDIA”. www.nvidia.com. Archived from the original on 2017-12-12. Retrieved 2017-12-11.
  26. ^ Abazovic, Fuad. “Crossfire and SLI market is just 300.000 units”. www.fudzilla.com. Archived from the original on 2020-03-03. Retrieved 2020-03-03.
  27. ^ “Is Multi-GPU Dead?”. Tech Altar. January 7, 2018. Archived from the original on March 27, 2020. Retrieved March 3, 2020.
  28. ^ “Nvidia SLI and AMD CrossFire is dead – but should we mourn multi-GPU gaming? | TechRadar”. www.techradar.com. Archived from the original on 2020-03-03. Retrieved 2020-03-03.
  29. ^ “Hardware Selection and Configuration Guide” (PDF). documents.blackmagicdesign.com. Archived (PDF) from the original on 2020-11-11. Retrieved 2020-11-10.
  30. ^ “Recommended System: Recommended Systems for DaVinci Resolve”. Puget Systems. Archived from the original on 2020-03-03. Retrieved 2020-03-03.
  31. ^ “GPU Accelerated Rendering and Hardware Encoding”. helpx.adobe.com. Archived from the original on 2020-03-03. Retrieved 2020-03-03.
  32. ^ “V-Ray Next Multi-GPU Performance Scaling”. Puget Systems. Archived from the original on 2020-03-03. Retrieved 2020-03-03.
  33. ^ “FAQ | GPU-accelerated 3D rendering software | Redshift”. www.redshift3d.com. Archived from the original on 2020-04-11. Retrieved 2020-03-03.
  34. ^ “OctaneRender 2020 Preview is here!”. Archived from the original on 2020-03-07. Retrieved 2020-03-03.
  35. ^ Williams, Rob. “Exploring Performance With Autodesk’s Arnold Renderer GPU Beta – Techgage”. techgage.com. Archived from the original on 2020-03-03. Retrieved 2020-03-03.
  36. ^ “GPU Rendering — Blender Manual”. docs.blender.org. Archived from the original on 2020-04-16. Retrieved 2020-03-03.
  37. ^ “V-Ray for Nuke – Ray Traced Rendering for Compositors | Chaos Group”. www.chaosgroup.com. Archived from the original on 2020-03-03. Retrieved 2020-03-03.
  38. ^ “System Requirements | Nuke | Foundry”. www.foundry.com. Archived from the original on 2020-08-01. Retrieved 2020-03-03.
  39. ^ “What about multi-GPU support?”. Archived from the original on 2021-01-18. Retrieved 2020-11-10.
  40. ^ a b “Graphics Card Market Up Sequentially in Q3, NVIDIA Gains as AMD Slips”. Archived from the original on 28 November 2013. Retrieved 30 November 2013.
  41. ^ a b Chen, Monica (16 April 2013). “Palit, PC Partner surpass Asustek in graphics card market share”. DIGITIMES. Archived from the original on 7 September 2013. Retrieved 1 December 2013.
  42. ^ a b
  43. ^
  44. ^
  45. ^
  46. ^ GetNews. “How Graphics Card shortage is killing PC Gaming”. Digital Journal. Archived from the original on 2021-09-01. Retrieved 2021-09-01.
  47. ^ “How Graphics Card shortage is killing PC Gaming”. MarketWatch. Archived from the original on 2021-09-01. Retrieved 2021-09-01.
  48. ^ “NVIDIA TITAN RTX is Here”. NVIDIA. Archived from the original on 2019-11-08. Retrieved 2019-11-07.
  49. ^ “Refresh rate recommended”. Archived from the original on 2007-01-02. Retrieved 2007-02-17.
  50. ^ “HDMI FAQ”. HDMI.org. Archived from the original on 2018-02-22. Retrieved 2007-07-09.
  51. ^ “DisplayPort Technical Overview” (PDF). VESA.org. January 10, 2011. Archived (PDF) from the original on 12 November 2020. Retrieved 23 January 2012.
  52. ^
  53. ^ “The Truth About DisplayPort vs. HDMI”. dell.com. Archived from the original on 2014-03-01. Retrieved 2013-03-13.
  54. ^ “Video Signals and Connectors”. Apple. Archived from the original on 26 March 2018. Retrieved 29 January 2016.
  55. ^ “How to Connect Component Video to a VGA Projector”. AZCentral. Archived from the original on 18 September 2021. Retrieved 29 January 2016.
  56. ^ “Quality Difference Between Component vs. HDMI”. Extreme Tech. Archived from the original on 4 February 2016. Retrieved 29 January 2016.
  57. ^ PCIe 2.1 has the same clock and bandwidth as PCIe 2.0

Sources[edit]

  • Mueller, Scott (2005) Upgrading and Repairing PCs. 16th edition. Que Publishing. ISBN 0-7897-3173-8

External links[edit]

  • How Graphics Cards Work at HowStuffWorks
  • Large image of graphic card history tree


3DP Chip 21.10 – find and update drivers for your computer


3DP Chip is a program that automatically detects and displays information about the CPU, motherboard, graphics card, sound card, Ethernet card, and other components installed in the user's computer. If the computer is connected to the Internet, it can also download the latest drivers for those components.

The software has a friendly interface that displays information about the components of the computer system. The main feature of 3DP Chip is automatically detecting the hardware drivers on the system and offering options to update those drivers to the latest version.

You can also copy the system information to the clipboard from 3DP Chip and send bug reports to the software developer.

[Figure: downloading 3DP Chip to update drivers for your computer.]

For example, as in the image above, when 3DP Chip starts you can see the HOME tab on the main interface, which shows hardware information: CPU, RAM, VGA, operating system, and so on. Lines marked with a yellow exclamation point (the network card in the image above) indicate drivers that need updating.

How to use 3DP Chip

Main interface: when you click the Driver tab, there are two options:

  • Backup: back up the machine's drivers
  • Restore: restore the system's drivers

When you choose Backup, a panel like the one shown below appears:


Select the driver types you want to back up, then press the Start Backup button to begin. To check and update driver information, hover the mouse over a driver marked with a yellow exclamation point (as shown below), and 3DP Chip will display the exact version number and corresponding update date.


Then click the item that needs updating, and 3DP Chip will open a web page where you can download and install the corresponding driver.

Review – 3DP Chip: find and update drivers for your computer

Overall, 3DP Chip is a useful piece of software, but you have to click on each device name and visit the corresponding web pages to check whether an update is needed. It would be better if all of this information were displayed directly in the program's main interface.

Pros

  • Fast driver detection.
  • Supports copying system information to the clipboard.
  • Easy driver updates.

Cons

  • Confusing interface.
  • Installs third-party products.

Computer simulation

From Wikipedia, the free encyclopedia

This article is about computer models within a scientific context. For simulating a computer on a computer, see emulator. “Computer model” redirects here. For computer models of 3-dimensional objects, see 3D modeling.

[Figure: a 48-hour computer simulation of Typhoon Mawar using the Weather Research and Forecasting model.]
[Figure: the process of building a computer model, and the interplay between experiment, simulation, and theory.]

Computer simulation is the process of mathematical modelling, performed on a computer, which is designed to predict the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system’s model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.[1]

Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program.[2] Other examples include a 1-billion-atom model of material deformation;[3] a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005;[4] a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.[5]

Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification.[6]

Simulation versus model[edit]

A computer model is the algorithms and equations used to capture the behavior of the system being modeled. By contrast, computer simulation is the actual running of the program that contains these equations or algorithms. Simulation, therefore, is the process of running a model. Thus one would not “build a simulation”; instead, one would “build a model (or a simulator)”, and then either “run the model” or, equivalently, “run a simulation”.

History[edit]


Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.[7]

Data preparation[edit]

The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models).

Input sources also vary widely:

  • Sensors and other physical devices connected to the model;
  • Control surfaces used to direct the progress of the simulation in some way;
  • Current or historical data entered by hand;
  • Values extracted as a by-product from other processes;
  • Values output for the purpose by other simulations, models, or processes.

Lastly, the time at which data is available varies:

  • “invariant” data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest;
  • data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor;
  • data can be provided during the simulation run, for example by a sensor network.

Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula. There are now many others.

Systems that accept data from external sources must be very careful in knowing what they are receiving. While it is easy for computers to read in values from text or binary files, what is much harder is knowing what the accuracy (compared to measurement resolution and precision) of the values is. Often the values are expressed as “error bars”, a minimum and maximum deviation from the value range within which the true value is expected to lie. Because digital computer mathematics is not perfect, rounding and truncation errors multiply this error, so it is useful to perform an “error analysis”[8] to confirm that values output by the simulation will still be usefully accurate.
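The rounding and truncation errors mentioned above are easy to demonstrate. The sketch below sums the inexact binary value 0.1 many times; the drift it shows is exactly what an error analysis must bound, and Python's compensated `math.fsum` is one way to control it:

```python
# Accumulated floating-point rounding error in a naive sum.
total = 0.0
for _ in range(10_000_000):
    total += 0.1
print(total)  # close to, but not exactly, 1000000.0

import math
print(math.fsum([0.1] * 10_000_000))  # compensated summation: 1000000.0
```

Types[edit]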

Computer models can be classified according to several independent pairs of attributes, including:

  • Stochastic or deterministic (and as a special case of deterministic, chaotic) – see external links below for examples of stochastic vs. deterministic simulations
  • Steady-state or dynamic
  • Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
  • Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAE:s) or dynamics simulation of field problems, e.g. CFD of FEM simulations (described by PDE:s).
  • Local or distributed.

Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes:

  • Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category.
  • If the underlying graph is not a regular grid, the model may belong to the meshfree method class.

Equations define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.

  • Dynamic simulations model changes in a system in response to (usually changing) input signals.
  • Stochastic models use random number generators to model chance or random events;
  • discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. The simulator reads the queue and triggers new events as each event is processed. It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events. A minimal event-queue sketch appears after this list.
  • continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most “analog” simulations were run on conventional digital computers that emulate the behavior of an analog computer.
  • A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent’s state is updated from one time-step to the next.
  • Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as “distributed simulations”. There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA).
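As promised in the discrete event simulation item above, here is a minimal sketch of the event-queue core such simulators maintain. The arrival/departure events and timings are invented for illustration:

```python
# A toy discrete event simulation loop built on a priority queue.
import heapq

events = [(0.0, "arrival"), (2.0, "arrival"), (4.5, "departure")]
heapq.heapify(events)  # queue ordered by simulated time

while events:
    clock, kind = heapq.heappop(events)  # jump straight to next event
    print(f"t={clock:.1f}: {kind}")
    if kind == "arrival" and clock < 6.0:
        # Each arrival schedules a follow-up departure 3 time units later.
        heapq.heappush(events, (clock + 3.0, "departure"))
```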

Visualization[edit]

Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and “see that rain was headed their way”) much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.

Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.

Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.

Computer simulation in science[edit]

[Figure: computer simulation of the process of osmosis.]

Generic examples of types of computer simulations in science, which are derived from an underlying mathematical description:

  • a numerical simulation of differential equations that cannot be solved analytically, theories that involve continuous systems such as phenomena in physical cosmology, fluid dynamics (e.g., climate models, roadway noise models, roadway air dispersion models), continuum mechanics and chemical kinetics fall into this category.
  • a stochastic simulation, typically used for discrete systems where events occur probabilistically and which cannot be described directly with differential equations (this is a discrete simulation in the above sense). Phenomena in this category include genetic drift, biochemical[9] or gene regulatory networks with small numbers of molecules (see also: Monte Carlo method). A minimal drift sketch appears after this list.
  • multiparticle simulation of the response of nanomaterials at multiple scales to an applied force for the purpose of modeling their thermoelastic and thermodynamic properties. Techniques used for such simulations are Molecular dynamics, Molecular mechanics, Monte Carlo method, and Multiscale Green’s function.
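The stochastic simulation sketch promised above: genetic drift of a single allele, driven purely by random sampling rather than by differential equations. The population size and starting frequency are illustrative assumptions:

```python
# Genetic drift as a stochastic simulation: resample allele copies
# binomially each generation until the allele fixes or is lost.
import random

N = 100          # diploid population -> 2N allele copies
freq = 0.5       # starting allele frequency
generation = 0

while 0.0 < freq < 1.0:
    copies = sum(1 for _ in range(2 * N) if random.random() < freq)
    freq = copies / (2 * N)
    generation += 1

print(f"Allele frequency reached {freq:.0f} after {generation} generations")
```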

Specific examples of computer simulations follow:

  • statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
  • agent based simulation has been used effectively in ecology, where it is often called “individual based modeling” and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
  • time stepped dynamic model. In hydrology there are several such hydrology transport models such as the SWMM and DSSAM Models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
  • computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R.
  • computer simulation using molecular modeling for drug discovery.[10]
  • computer simulation to model viral infection in mammalian cells.[9]
  • computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.[11]
  • Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
  • An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory.

Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows’ World3 used in the Limits to Growth, James Lovelock’s Daisyworld and Thomas Ray’s Tierra.

In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology,[12] which also includes qualitative and quantitative methods, reviews of the literature (including scholarly), and interviews with experts, and which forms an extension of data triangulation. Of course, similar to any other scientific method, replication is an important part of computational modeling.[13]

Computer simulation in practical contexts[edit]

Computer simulations are used in a wide variety of practical contexts, such as:

  • analysis of air pollutant dispersion using atmospheric dispersion modeling
  • design of complex systems such as aircraft and also logistics systems.
  • design of noise barriers to effect roadway noise mitigation
  • modeling of application performance[14]
  • flight simulators to train pilots
  • weather forecasting
  • forecasting of risk
  • simulation of electrical circuits
  • Power system simulation
  • simulation of other computers is emulation.
  • forecasting of prices on financial markets (for example Adaptive Modeler)
  • behavior of structures (such as buildings and industrial parts) under stress and other conditions
  • design of industrial processes, such as chemical processing plants
  • strategic management and organizational studies
  • reservoir simulation for the petroleum engineering to model the subsurface reservoir
  • process engineering simulation tools.
  • robot simulators for the design of robots and robot control algorithms
  • urban simulation models that simulate dynamic patterns of urban development and responses to urban land use and transportation policies.
  • traffic engineering to plan or redesign parts of the street network, from single junctions through whole cities to a national highway network, for transportation system planning, design and operations. See the more detailed article on Simulation in Transportation.
  • modeling car crashes to test safety mechanisms in new vehicle models.
  • crop-soil systems in agriculture, via dedicated software frameworks (e.g. BioMA, OMS3, APSIM)

The reliability and the trust people put in computer simulations depend on the validity of the simulation model; therefore, verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, it is a special point of attention in stochastic simulations, where the random numbers should actually be pseudo-random numbers drawn from a fixed seed. Exceptions to reproducibility are human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.
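A minimal sketch of reproducibility via seeded pseudo-random numbers, in Python with NumPy; the toy estimator and its names are invented for illustration:

    import numpy as np

    # Reproducible stochastic simulation: fixing the seed of the pseudo-random
    # generator makes every execution return the same result.
    def noisy_queue_estimate(seed: int, n_customers: int = 10_000) -> float:
        rng = np.random.default_rng(seed)                       # seeded generator
        service = rng.exponential(scale=2.0, size=n_customers)  # assumed service times
        return service.mean()

    print(noisy_queue_estimate(seed=42))   # identical on every run
    print(noisy_queue_estimate(seed=42))   # same value again
    print(noisy_queue_estimate(seed=7))    # different stream, different estimate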

Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.[15]

Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.

In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar “hard to detect” errors as well as produce performance information and tuning data.

Pitfalls[edit]

Although sometimes ignored in computer simulations, it is very important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
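A sketch of the significant-figure pitfall using a Monte Carlo combination of distributions (NumPy assumed; the oilfield quantities and distributions are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical oilfield example: recoverable volume = area * thickness * net ratio.
    # The net ratio is known to only one significant figure (~0.3), so the output
    # should not be reported to more precision than that input supports.
    area      = rng.normal(5.0, 0.5, n)     # km^2, assumed distribution
    thickness = rng.normal(20.0, 2.0, n)    # m, assumed distribution
    net_ratio = rng.normal(0.3, 0.05, n)    # known to one significant figure

    volume = area * thickness * net_ratio
    print(f"mean = {volume.mean():.4f}")    # four figures: misleadingly precise
    print(f"mean ~ {volume.mean():.1g}")    # one significant figure: honest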

Model calibration techniques[edit]

The following three steps should be used to produce accurate simulation models: calibration, verification, and validation. Computer simulations are good at portraying and comparing theoretical scenarios, but in order to accurately model actual case studies they have to match what is actually happening today. A base model should be created and calibrated so that it matches the area being studied. The calibrated model should then be verified to ensure that the model is operating as expected based on the inputs. Once the model has been verified, the final step is to validate the model by comparing the outputs to historical data from the study area. This can be done by using statistical techniques and ensuring an adequate R-squared value. Unless these techniques are employed, the simulation model created will produce inaccurate results and not be a useful prediction tool.

Model calibration is achieved by adjusting any available parameters in order to adjust how the model operates and simulates the process. For example, in traffic simulation, typical parameters include look-ahead distance, car-following sensitivity, discharge headway, and start-up lost time. These parameters influence driver behavior such as when and how long it takes a driver to change lanes, how much distance a driver leaves between their car and the car in front of it, and how quickly a driver starts to accelerate through an intersection. Adjusting these parameters has a direct effect on the amount of traffic volume that can pass through the modeled roadway network by making the drivers more or less aggressive. These are examples of calibration parameters that can be fine-tuned to match characteristics observed in the field at the study location. Most traffic models have typical default values, but they may need to be adjusted to better match the driver behavior at the specific location being studied.
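A minimal sketch of calibration as a parameter sweep, in Python; the stand-in model, the observed count and the headway range are all invented for illustration:

    import numpy as np

    def simulate_throughput(headway_s: float) -> float:
        """Stand-in for a traffic model run: discharge headway -> vehicles/hour.
        A real model would be far more involved; this keeps the loop runnable."""
        return 3600.0 / headway_s

    observed_vph = 1750.0   # field count at the study location (assumed)

    # Sweep the calibration parameter and keep the value that best matches
    # the field data.
    candidates = np.arange(1.8, 2.6, 0.05)   # plausible discharge headways, seconds
    errors = [abs(simulate_throughput(h) - observed_vph) for h in candidates]
    best = candidates[int(np.argmin(errors))]
    print(f"calibrated discharge headway: {best:.2f} s")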

Model verification is achieved by obtaining output data from the model and comparing them to what is expected from the input data. For example, in traffic simulation, traffic volume can be verified to ensure that actual volume throughput in the model is reasonably close to traffic volumes input into the model. Ten percent is a typical threshold used in traffic simulation to determine if output volumes are reasonably close to input volumes. Simulation models handle model inputs in different ways so traffic that enters the network, for example, may or may not reach its desired destination. Additionally, traffic that wants to enter the network may not be able to, if congestion exists. This is why model verification is a very important part of the modeling process.
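The ten-percent verification check is simple to state in code; a sketch (the volumes are illustrative):

    def volumes_verified(input_vph: float, output_vph: float, tol: float = 0.10) -> bool:
        """Verification check: is the model's output volume within `tol`
        (ten percent by default) of the volume fed into it?"""
        return abs(output_vph - input_vph) <= tol * input_vph

    print(volumes_verified(2000, 1870))   # True: within ten percent
    print(volumes_verified(2000, 1700))   # False: congestion or input handling lost traffic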

The final step is to validate the model by comparing the results with what is expected based on historical data from the study area. Ideally, the model should produce similar results to what has happened historically. This is typically verified by nothing more than quoting the R-squared statistic from the fit. This statistic measures the fraction of variability that is accounted for by the model. A high R-squared value does not necessarily mean the model fits the data well. Another tool used to validate models is graphical residual analysis. If model output values drastically differ from historical values, it probably means there is an error in the model. Before using the model as a base to produce additional models, it is important to verify it for different scenarios to ensure that each one is accurate. If the outputs do not reasonably match historic values during the validation process, the model should be reviewed and updated to produce results more in line with expectations. It is an iterative process that helps to produce more realistic models.
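A sketch of validation against historical data, computing the R-squared statistic and the residuals that a graphical residual analysis would plot (NumPy assumed; the counts are invented):

    import numpy as np

    def r_squared(observed: np.ndarray, modeled: np.ndarray) -> float:
        ss_res = np.sum((observed - modeled) ** 2)
        ss_tot = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    observed = np.array([980., 1120., 1340., 1500., 1680.])   # historical counts, assumed
    modeled  = np.array([1010., 1090., 1305., 1560., 1645.])  # model output, assumed

    print(f"R^2 = {r_squared(observed, modeled):.3f}")

    # A graphical residual analysis would plot these; a pattern (rather than
    # random scatter) in the residuals signals model error even when R^2 is high.
    print("residuals:", observed - modeled)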

Validating traffic simulation models requires comparing traffic estimated by the model to observed traffic on the roadway and transit systems. Initial comparisons are for trip interchanges between quadrants, sectors, or other large areas of interest. The next step is to compare traffic estimated by the models to traffic counts, including transit ridership, crossing contrived barriers in the study area. These are typically called screenlines, cutlines, and cordon lines and may be imaginary or actual physical barriers. Cordon lines surround particular areas such as a city’s central business district or other major activity centers. Transit ridership estimates are commonly validated by comparing them to actual patronage crossing cordon lines around the central business district.

Three sources of error can cause weak correlation during calibration: input error, model error, and parameter error. In general, input error and parameter error can be adjusted easily by the user. Model error, however, is caused by the methodology used in the model and may not be as easy to fix. Simulation models are typically built using several different modeling theories that can produce conflicting results. Some models are more generalized while others are more detailed. If model error occurs as a result, it may be necessary to adjust the model methodology to make results more consistent.

The steps above are necessary to ensure that simulation models are functioning properly and can be used to produce realistic results. Simulation models can be used as a tool to verify engineering theories, but they are only valid if calibrated properly. Once satisfactory estimates of the parameters for all models have been obtained, the models must be checked to assure that they adequately perform the intended functions. The validation process establishes the credibility of the model by demonstrating its ability to replicate reality. The importance of model validation underscores the need for careful planning, thoroughness and accuracy of the input data collection program that has this purpose. Efforts should be made to ensure collected data is consistent with expected values. For example, in traffic analysis it is typical for a traffic engineer to perform a site visit to verify traffic counts and become familiar with traffic patterns in the area. The resulting models and forecasts will be no better than the data used for model estimation and validation.

See also[edit]

  • Computational model
  • Emulator
  • Energy modeling
  • Illustris project
  • List of computer simulation software
  • Scene generator
  • Stencil code
  • UniverseMachine
  • Virtual prototyping
  • Web-based simulation
  • Digital Twin
  • Simulation video game
  • Simulation hypothesis
  • Virtual reality

References[edit]

  1. ^ Strogatz, Steven (2007). “The End of Insight”. In Brockman, John (ed.). What is your dangerous idea?. HarperCollins. ISBN 9780061214950.
  2. ^ “Researchers stage largest Military Simulation ever” Archived 2008-01-22 at the Wayback Machine, Jet Propulsion Laboratory, Caltech, December 1997.
  3. ^ “Molecular Simulation of Macroscopic Phenomena”. Archived from the original on 2013-05-22.
  4. ^ “Largest computational biology simulation mimics life’s most essential nanomachine” (news), News Release, Nancy Ambrosiano, Los Alamos National Laboratory, Los Alamos, NM, October 2005, webpage: LANL-Fuse-story7428 Archived 2007-07-04 at the Wayback Machine.
  5. ^ “Mission to build a simulated brain begins” Archived 2015-02-09 at the Wayback Machine, project of the institute at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, New Scientist, June 2005.
  6. ^ Santner, Thomas J; Williams, Brian J; Notz, William I (2003). The design and analysis of computer experiments. Springer Verlag.
  7. ^ Bratley, Paul; Fox, Bennet L.; Schrage, Linus E. (2011-06-28). A Guide to Simulation. Springer Science & Business Media. ISBN 9781441987242.
  8. ^ John Robert Taylor (1999). An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. University Science Books. pp. 128–129. ISBN 978-0-935702-75-0. Archived from the original on 2015-03-16.
  9. ^ a b Gupta, Ankur; Rawlings, James B. (April 2014). “Comparison of Parameter Estimation Methods in Stochastic Chemical Kinetic Models: Examples in Systems Biology”. AIChE Journal 60 (4): 1253–1268. doi:10.1002/aic.14409. ISSN 0001-1541. PMC 4946376. PMID 27429455.
  10. ^ Atanasov, AG; Waltenberger, B; Pferschy-Wenzig, EM; Linder, T; Wawrosch, C; Uhrin, P; Temml, V; Wang, L; Schwaiger, S; Heiss, EH; Rollinger, JM; Schuster, D; Breuss, JM; Bochkov, V; Mihovilovic, MD; Kopp, B; Bauer, R; Dirsch, VM; Stuppner, H (2015). “Discovery and resupply of pharmacologically active plant-derived natural products: A review”. Biotechnol Adv 33 (8): 1582–614. doi:10.1016/j.biotechadv.2015.08.001. PMC 4748402. PMID 26281720.
  11. ^ Mizukami, Koichi; Saito, Fumio; Baron, Michel. Study on grinding of pharmaceutical products with an aid of computer simulation Archived 2011-07-21 at the Wayback Machine
  12. ^ Mesly, Olivier (2015). Creating Models in Psychological Research. United States: Springer Psychology. 126 pages. ISBN 978-3-319-15752-8
  13. ^ Wilensky, Uri; Rand, William (2007). “Making Models Match: Replicating an Agent-Based Model”. Journal of Artificial Societies and Social Simulation 10 (4): 2.
  14. ^ Wescott, Bob (2013). The Every Computer Performance Book, Chapter 7: Modeling Computer Performance. CreateSpace. ISBN 978-1482657753.
  15. ^ Baase, Sara. A Gift of Fire: Social, Legal, and Ethical Issues for Computing and the Internet. 3. Upper Saddle River: Prentice Hall, 2007. Pages 363–364. ISBN 0-13-600848-8.

Further reading[edit]

  • Young, Joseph and Findley, Michael. 2014. “Computational Modeling to Study Conflicts and Terrorism.” Routledge Handbook of Research Methods in Military Studies edited by Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. pp. 249–260. New York: Routledge.
  • R. Frigg and S. Hartmann, Models in Science. Entry in the Stanford Encyclopedia of Philosophy.
  • E. Winsberg Simulation in Science. Entry in the Stanford Encyclopedia of Philosophy.
  • S. Hartmann, The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer 1996, 77–100.
  • E. Winsberg, Science in the Age of Computer Simulation. Chicago: University of Chicago Press, 2010.
  • P. Humphreys, Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press, 2004.
  • James J. Nutaro (2011). Building Software for Simulation: Theory and Algorithms, with Applications in C++. John Wiley & Sons. ISBN 978-1-118-09945-2.
  • Desa, W. L. H. M., Kamaruddin, S., & Nawawi, M. K. M. (2012). Modeling of Aircraft Composite Parts Using Simulation. Advanced Material Research, 591–593, 557–560.

External links[edit]

  • Guide to the Computer Simulation Oral History Archive 2003-2018

Categories:

  • Computational science
  • Scientific modeling
  • Simulation software
  • Virtual reality
  • Alternatives to animal testing
  • Computational fields of study

Installation (computer programs)

From Wikipedia, the free encyclopedia

“Installer” redirects here. For the AmigaOS scripting language, see Installer (programming language).


Installation (or setup) of a computer program (including device drivers and plugins) is the act of making the program ready for execution. Installation refers to the particular configuration of software or hardware with a view to making it usable with the computer. A soft or digital copy of the piece of software (program) is needed to install it. Because the installation process varies for each program and each computer, programs (including operating systems) often come with an installer, a specialised program responsible for doing whatever is needed (see below) for the installation. Installation may be part of a larger software deployment process.

Installation typically involves code (program) being copied or generated from the installation files to new files on the local computer for easier access by the operating system, creating necessary directories, registering environment variables, providing a separate program for uninstallation, and so on. Because code is generally copied or generated in multiple locations, uninstallation usually involves more than just erasing the program folder. For example, registry files and other system code may need to be modified or deleted for a complete uninstallation.

Overview[edit]

Some computer programs can be executed by simply copying them into a folder stored on a computer and executing them. Other programs are supplied in a form unsuitable for immediate execution and therefore need an installation procedure. Once installed, the program can be executed again and again, without the need to reinstall before each execution.

Common operations performed during software installations include:

  • Making sure that necessary system requirements are met
  • Checking for existing versions of the software
  • Creating or updating program files and folders
  • Adding configuration data such as configuration files, Windows registry entries or environment variables
  • Making the software accessible to the user, for instance by creating links, shortcuts or bookmarks
  • Configuring components that run automatically, such as daemons or Windows services
  • Performing product activation
  • Updating the software versions
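As an illustration of several of these operations (creating folders, copying program files, writing configuration data, and recording what was installed for a later uninstaller), here is a minimal, hypothetical installer sketch in Python; the paths and file names are invented:

    import shutil
    from pathlib import Path

    SOURCE = Path("dist")                      # unpacked installation files (hypothetical)
    TARGET = Path.home() / ".local" / "myapp"  # installation directory (hypothetical)

    def install() -> None:
        TARGET.mkdir(parents=True, exist_ok=True)      # create program folders
        manifest = []
        for src in SOURCE.rglob("*"):
            if src.is_file():
                dest = TARGET / src.relative_to(SOURCE)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)                # copy program files
                manifest.append(str(dest))
        # Record what was installed so an uninstaller can undo more than
        # just deleting the program folder.
        (TARGET / "install_manifest.txt").write_text("\n".join(manifest))
        # Configuration data: a config file here; on Windows this might be
        # registry entries or environment variables instead.
        (TARGET / "config.ini").write_text("[myapp]\nfirst_run = true\n")

    if __name__ == "__main__":
        install()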

These operations may be carried out free of charge or for a fee. In the case of payment, installation costs are the costs connected with, relevant to, or incurred as a result of installing the drivers or the equipment on the customers’ premises.

Some installers may attempt to trick users into installing junkware such as various forms of adware, toolbars, trialware or software of partnering companies.[1] To prevent this, users need to pay extra attention to what exactly they are being asked to install. The installation of additional software can then simply be skipped or unchecked (this may require the user to use the “custom”, “detailed” or “expert” version of the installation procedure).[1]
Such malicious conduct is not necessarily a decision by the software developers or their company, but can also be an issue of external installers such as the Download.com installer by CNET.[2]

Necessity[edit]

As mentioned earlier, some computer programs need no installation. This was once usual for many programs which ran on DOS, Mac OS, Atari TOS and AmigaOS. As computing environments grew more complex and fixed hard drives replaced floppy disks, the need for a tangible installation procedure presented itself. For example, Commodore released the Installer for the Amiga.

A class of modern applications that do not need installation are known as portable applications, as they may be moved to different computers and run. Similarly, there are live operating systems, which do not need installation and can be run directly from a bootable CD, DVD, USB flash drive or loaded over the network, as with thin clients. Examples are AmigaOS 4.0, various Linux distributions, MorphOS and Mac OS versions 1.0 through 9.0. (See live CD and live USB.) Finally, web applications, which run inside a web browser, do not need installation.

Types[edit]

Attended installation[edit]

On Windows systems, this is the most common form of installation. An installation process usually needs a user who attends it to make choices, such as accepting or declining an end-user license agreement (EULA), specifying preferences such as the installation location, supplying passwords or assisting in product activation. In graphical environments, installers that offer a wizard-based interface are common. Attended installers may ask users to help mitigate errors. For instance, if the disk on which the computer program is being installed is full, the installer may ask the user to specify another target path or clear enough space on the disk. A common misconception is that unarchiving counts as installation; unarchiving is not considered an installation action because it does not include user choices such as accepting or declining an EULA.

Silent installation[edit]

Installation that does not display messages or windows during its progress. “Silent installation” is not the same as “unattended installation” (see below): all silent installations are unattended, but not all unattended installations are silent. The reason behind a silent installation may be convenience or subterfuge. Malware is almost always installed silently.[citation needed] Silent installation is of little use to ordinary users, but in larger organizations where thousands of users work, deploying applications is a routine task, and silent installation lets an application be installed in the background without affecting users’ work. Silent parameters vary from program to program; whether a program supports them can often be checked with “<software.exe> /?”, “<software.exe> /help” or “<software.exe> -help”.
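As a hedged sketch of scripted silent deployment in Python: the installer name is hypothetical, and the /S switch is only a common convention (used, for example, by NSIS-built installers); each product documents its own silent flag, discoverable as described above.

    import subprocess

    # Run a (hypothetical) installer silently; "setup.exe" and the /S switch
    # are illustrative. NSIS-built installers conventionally accept /S, but
    # the correct flag must be checked per product ("setup.exe /?").
    result = subprocess.run(["setup.exe", "/S"], capture_output=True, text=True)
    if result.returncode != 0:
        print("silent installation failed with code", result.returncode)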

Unattended installation[edit]

Installation that is performed without user interaction during its progress or with no user present at all. One of the reasons to use this approach is to automate the installation of a large number of systems. An unattended installation either does not require the user to supply anything or has received all necessary input prior to the start of installation. Such input may be in the form of command line switches or an answer file, a file that contains all the necessary parameters. Windows XP and most Linux distributions are examples of operating systems that can be installed with an answer file. In unattended installation, it is assumed that there is no user to help mitigate errors. For instance, if the installation medium was faulty, the installer should fail the installation, as there is no user to fix the fault or replace the medium. Unattended installers may record errors in a computer log for later review.
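A minimal sketch of the answer-file idea, assuming Python’s configparser; the file format, section and keys here are invented for illustration, not any real installer’s schema:

    import configparser
    import textwrap

    # Hypothetical answer file: every input the installer would normally
    # prompt for, gathered before installation starts.
    ANSWERS = textwrap.dedent("""\
        [install]
        accept_eula = yes
        target_dir  = /opt/myapp
        language    = en
        """)

    parser = configparser.ConfigParser()
    parser.read_string(ANSWERS)
    print(dict(parser["install"]))   # the installer consumes these instead of prompting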

Headless installation[edit]

Installation performed without using a computer monitor connected. In attended forms of headless installation, another machine connects to the target machine (for instance, via a local area network) and takes over the display output. Since a headless installation does not need a user at the location of the target computer, unattended headless installers may be used to install a program on multiple machines at the same time.

Scheduled or automated installation[edit]

An installation process that runs on a preset time or when a predefined condition transpires, as opposed to an installation process that starts explicitly on a user’s command. For instance, a system administrator wishing to install a later version of a computer program that is being used can schedule that installation to occur when that program is not running. An operating system may automatically install a device driver for a device that the user connects. (See plug and play.) Malware may also be installed automatically. For example, the infamous Conficker was installed when the user plugged an infected device into their computer.

Clean installation[edit]

A clean installation is one that is done in the absence of any interfering elements such as old versions of the computer program being installed or leftovers from a previous installation. In particular, the clean installation of an operating system is an installation in which the target disk partition is erased before installation. Since the interfering elements are absent, a clean installation may succeed where an unclean installation may fail or may take significantly longer.

Network installation[edit]

Not to be confused with network booting.

Network installation, shortened netinstall, is an installation of a program from a shared network resource that may be done by installing a minimal system before proceeding to download further packages over the network. This may simply be a copy of the original media, but software publishers that offer site licenses for institutional customers may provide a version intended for installation over a network.

Installer[edit]


An installation program or installer is a computer program that installs files, such as applications, drivers, or other software, onto a computer. Some installers are specifically made to install the files they contain; other installers are general-purpose and work by reading the contents of the software package to be installed.

Installers exist both as “standalone installers” and “web installers”: the former allows offline installation, as it contains all the installation files, whereas the latter must download the files necessary for installation from the web at install time.

The differences between a package management system and an installer are:

Shipped with
  Package manager: usually, the operating system
  Installer: each computer program

Location of installation information
  Package manager: one central installation database
  Installer: entirely at the discretion of the installer; it could be a file within the app’s folder, or among the operating system’s files and folders. At best, installers may register themselves with an uninstallers list without exposing installation information.

Scope of maintenance
  Package manager: potentially all packages on the system
  Installer: only the product with which it was bundled

Developed by
  Package manager: one package manager vendor
  Installer: multiple installer vendors

Package format
  Package manager: a handful of well-known formats
  Installer: there could be as many formats as the number of apps

Package format compatibility
  Package manager: a package can be consumed as long as the package manager supports it; either newer versions of the package manager keep supporting it or the user does not upgrade the package manager
  Installer: the installer is always compatible with its archive format, if it uses any; however, installers, like all computer programs, may be affected by software rot

Bootstrapper[edit]

During the installation of a computer program, it is sometimes necessary to update the installer or package manager itself. To make this possible, a technique called bootstrapping is used. The common pattern is to use a small executable file which updates the installer and starts the real installation after the update. This small executable is called a bootstrapper. Sometimes the bootstrapper also installs other prerequisites for the software during the bootstrapping process.
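A sketch of the bootstrapping pattern in Python; the file names are hypothetical, and the real logic (downloading a newer installer, checking versions) is elided:

    import subprocess
    import sys
    from pathlib import Path

    # Bootstrapper pattern (hypothetical file names): a tiny program that
    # first brings the real installer up to date, then hands off to it.
    def bootstrap() -> None:
        updater = Path("update_installer.py")    # fetches/replaces the installer
        installer = Path("real_installer.py")    # performs the actual installation
        if updater.exists():
            subprocess.run([sys.executable, str(updater)], check=True)
        subprocess.run([sys.executable, str(installer)], check=True)

    if __name__ == "__main__":
        bootstrap()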

Common types[edit]

Main article: List of installation software

Cross-platform installer builders produce installers that run on Windows, macOS and Linux. An example is InstallAnywhere by Flexera Software.

The Windows NT family includes an installation API and an associated service called Windows Installer. Microsoft provides a minimum level of tools required to create installers using Windows Installer in the freely available Windows SDK, instead focusing on the API so that developers and third parties can leverage it in creating custom installers. Third-party tools may support creating installers with this API to speed up the process. Examples include InstallShield (Flexera Software) and WiX (Outercurve Foundation). Installation authoring tools that do not rely on Windows Installer include Wise Installation Studio (Wise Solutions, Inc.), Installer VISE (MindVision Software), Visual Installer (SamLogic), NSIS, Clickteam, InnoSetup and InstallSimple.

macOS includes Installer, a native package manager. macOS also includes a separate software updating application, Software Update, but it only supports Apple and system software. Included in the Dock as of 10.6.6, the Mac App Store shares many attributes with the successful App Store for iOS devices, such as a similar app approval process, the use of Apple ID for purchases, and automatic installation and updating. Although this is Apple’s preferred delivery method for macOS,[3] previously purchased licenses cannot be transferred to the Mac App Store for downloading or automatic updating. Commercial applications for macOS may also use a third-party installer, such as the Mac version of Installer VISE (MindVision Software) or InstallerMaker (StuffIt).

System installer[edit]

A system installer is the software that is used to set up and install an operating system onto a device. Examples of system installers on Linux are Ubiquity and Wubi for Ubuntu, Anaconda for CentOS and Fedora, Debian-Installer for Debian-based versions of Linux, and YaST for SUSE-based projects. Another example is found in the Haiku operating system, which uses a utility called Haiku Installer to install itself onto a device after booting from a live CD or live USB.

See also[edit]

  • Application streaming
  • Application virtualization
  • Pre-installed software
  • Self-extractable archive
  • Software distribution
  • Uninstaller

References[edit]

  1. ^ a b Hoffman, Chris (27 July 2013). “How to Avoid Installing Junk Programs When Downloading Free Software”. HowToGeek. Retrieved 6 October 2015.
  2. ^ Mathews, Lee (22 August 2011). “Download.com wraps downloads in bloatware, lies about motivations”. ExtremeTech. Retrieved 6 October 2015.
  3. ^ “macOS – What is macOS”. Apple. Retrieved 5 April 2018.

Categories:

  • Installation software
  • Package management systems

Software configuration management

From Wikipedia, the free encyclopedia


Not to be confused with Version Control System.

IEEE software life cycle

  • SQA – Software quality assurance • IEEE 730
  • SCM – Software configuration management • IEEE 828
  • STD – Software test documentation • IEEE 29119
  • SRS – Software requirements specification • IEEE 29148
  • V&V – Software verification and validation • IEEE 1012
  • SDD – Software design description • IEEE 1016
  • SPM – Software project management • IEEE 16326
  • SUD – Software user documentation • IEEE 24748
  • SRA – Software reviews and audit • IEEE 1028

In software engineering, software configuration management (SCM or S/W CM) is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management.[1] SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine what was changed and who changed it. If a configuration is working well, SCM can determine how to replicate it across many hosts.
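For example, with a revision-control-based SCM tool such as Git, “what was changed and who changed it” can be read straight from the recorded history. A small Python sketch, assuming Git is installed; the path is illustrative:

    import subprocess

    def last_change(path: str) -> str:
        """Return the last commit (hash, author, date, subject) that touched `path`."""
        out = subprocess.run(
            ["git", "log", "-1", "--format=%h %an %ad %s", "--", path],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    # e.g. "3fa2c1d Alice Mon Apr 2 ... Fix overflow in parser"
    print(last_change("src/app.py"))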

The acronym “SCM” is also expanded as source configuration management process and software change and configuration management.[2] However, “configuration” is generally understood to cover changes typically made by a system administrator.

Purposes[edit]


The goals of SCM are generally:[citation needed]

  • Configuration identification – Identifying configurations, configuration items and baselines.
  • Configuration control – Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
  • Configuration status accounting – Recording and reporting all the necessary information on the status of the development process.
  • Configuration auditing – Ensuring that configurations contain all their intended parts and conform to their specifying documents, including requirements, architectural specifications and user manuals.
  • Build management – Managing the process and tools used for builds.
  • Process management – Ensuring adherence to the organization’s development process.
  • Environment management – Managing the software and hardware that host the system.
  • Teamwork – Facilitating team interactions related to the process.
  • Defect tracking – Making sure every defect has traceability back to the source.

With the introduction of cloud computing and DevOps, the purposes of SCM tools have in some cases merged. The SCM tools themselves have become virtual appliances that can be instantiated as virtual machines and saved with state and version. The tools can model and manage cloud-based virtual resources, including virtual appliances, storage volumes, and software bundles. The roles and responsibilities of the participants have likewise merged, and developers can now dynamically instantiate virtual servers and related resources.[3]

History[edit]

The history of software configuration management (SCM) in computing can be traced back to the 1950s, when CM (configuration management), originally applied to hardware development and production control, was borrowed for software development. Early software had a physical footprint, such as cards, tapes, and other media. The first software configuration management was a manual operation. With advances in language and complexity, software engineering, involving configuration management and other methods, became a major concern due to issues such as schedule, budget, and quality. Practical lessons over the years led to the definition and establishment of procedures and tools. Eventually, these tools became systems to manage software changes.[4] Industry-wide practices, both open and proprietary (such as revision control systems), were offered as solutions. With the growing use of computers, systems emerged that handled a broader scope of requirements, including requirements management, design alternatives, quality control, and more; later tools followed the guidelines of organizations such as the Capability Maturity Model of the Software Engineering Institute.

See also[edit]

  • Application lifecycle management
  • Comparison of open-source configuration management software
  • Comparison of version control software
  • Continuous configuration automation
  • List of revision control software
  • Infrastructure as code

References[edit]

  1. ^ Pressman, Roger S. (2009). Software Engineering: A Practitioner’s Approach (7th international ed.). New York: McGraw-Hill.
  2. ^ Gartner and Forrester Research
  3. ^ Amies, A; Peddle, S; Pan, TM; Zou, PX (June 5, 2012). “Developing cloud applications with Rational tools”. IBM DeveloperWorks. IBM.
  4. ^ “A Guide to Understanding Configuration Management in Trusted Systems” (1988). National Computer Security Center (via Google)

Further reading[edit]

  • 828-2012 IEEE Standard for Configuration Management in Systems and Software Engineering. 2012. doi:10.1109/IEEESTD.2012.6170935. ISBN 978-0-7381-7232-3.
  • Aiello, R. (2010). Configuration Management Best Practices: Practical Methods that Work in the Real World (1st ed.). Addison-Wesley. ISBN 0-321-68586-5.
  • Babich, W.A. (1986). Software Configuration Management, Coordination for Team Productivity. 1st edition. Boston: Addison-Wesley.
  • Berczuk, Appleton (2003). Software Configuration Management Patterns: Effective Teamwork, Practical Integration (1st ed.). Addison-Wesley. ISBN 0-201-74117-2.
  • Bersoff, E.H. (1997). Elements of Software Configuration Management. IEEE Computer Society Press, Los Alamitos, CA, 1–32.
  • Dennis, A., Wixom, B.H. & Tegarden, D. (2002). Systems Analysis and Design: An Object-Oriented Approach with UML. Hoboken, New York: John Wiley & Sons, Inc.
  • Department of Defense, USA (2001). Military Handbook: Configuration Management Guidance (rev. A) (MIL-HDBK-61A). Retrieved January 5, 2010, from http://www.everyspec.com/MIL-HDBK/MIL-HDBK-0001-0099/MIL-HDBK-61_11531/.
  • Futrell, R.T. et al. (2002). Quality Software Project Management. 1st edition. Prentice Hall.
  • International Organization for Standardization (2003). ISO 10007: Quality management systems – Guidelines for configuration management.
  • Saeki, M. (2003). Embedding metrics into information systems development methods: an application of method engineering technique. CAiSE 2003, 374–389.
  • Scott, J.A. & Nisse, D. (2001). Software configuration management. In: Guide to the Software Engineering Body of Knowledge. Retrieved January 5, 2010, from http://www.computer.org/portal/web/swebok/htmlformat.
  • Paul M. Duvall, Steve Matyas, and Andrew Glover (2007). Continuous Integration: Improving Software Quality and Reducing Risk (1st ed.). Addison-Wesley Professional. ISBN 0-321-33638-0.

External links[edit]

  • SCM and ISO 9001, Robert Bamford and William Deibler, SSQC
  • Use cases and implementation of application lifecycle management
  • Parallel development strategies for software configuration management

Categories:

  • Configuration management
  • Software engineering
  • IEEE standards
  • Types of tools used in software development
