We have a GitHub Discussions section now. If a problem is definitely a defect in the core, you will get it fixed faster by creating an issue, as I prioritize issues over catching up on discussions.
This corrects many of the bugs present in 2.6.x
2.6.x significantly improves the flash footprint of Serial while adding features, allows Wire to wake a slave from sleep without corrupting data, and much, much more; see the Changelog.
Only versions of the Arduino IDE downloaded from arduino.cc should be used, NEVER from a Linux package manager. The package managers often have the Arduino IDE - but have modified it, despite knowing nothing about Arduino or embedded development in general, much less what they would need to know to modify it successfully. Those versions are notorious for subtle but serious issues caused by these unwise modifications. This core should not be expected to work on such versions, and for this reason no changes will be made to this core for the sake of fixing versions of the IDE that come from package managers.
This is a bug in the Arduino client.
IDE versions between 1.8.13 and 2.x developed significant novel defects. IDE versions 1.8.2 and earlier, however, possess crippling unfixed defects. I believe that they finally have a working version of the IDE out, and that the latest version is able to install my core correctly.
Prior to megaTinyCore 2.6.0, manually installing megaTinyCore in your Arduino folder would crash version 1.8.14 of the IDE due to this bug. Users of 1.8.14 and later must use megaTinyCore version 2.6.0 or later.
I buy a lot of electronics stuff on AliExpress. It's a great marketplace for things that are made by Chinese companies and are mostly generic, including loads of components unavailable to individuals in the global West any other way (e.g., the minimum order is a reel or something like that - if you can even find a component vendor that works with the no-name Chinese chip maker). It is not a great place for the latest semiconductor product lines from major Western manufacturers, especially in the midst of a historic shortage of said chips. The modern AVR devices, when they are available through those channels at all, are frequently reported to be fake or defective (like ATtiny412s that think they're 416s and may not correctly execute power-on reset). For that matter, you probably don't want to buy any AVR microcontrollers on AliExpress... Assembled boards, like Arduino Nano clones, generally work if you avoid the ones with the third-party LGT8 chips and watch out for the ones with the ATmega168p instead of the '328p - but there are a lot of reports of bogus microcontrollers when they're sold as bare chips (I have heard of fake ATtiny85s that were actually remarked ATtiny13s; it's not just modern AVRs that get faked). There are a lot of interesting theories for where those bogus chips came from, and Microchip has remained totally silent on the issue.
This document is best viewed online (as opposed to opening the markdown file in your favorite text editor), so that links are clickable, inline images are shown, and, probably most importantly, the tables render correctly. Again, this [document can be found on GitHub](https://github.com/SpenceKonde/megaTinyCore).
Older versions do not properly handle the programmers in the Tools -> Programmers menu, which degrades the UX rapidly as the number of installed cores increases. They are not suitable. The newest versions, starting with 1.8.14 (including 1.8.17, 1.8.18, and 1.8.19), may generate a "panic: no major version found" error because they fail to properly parse platform.txt. Since 2.6.0 we have been manually modifying platform.txt directly before release, so this is less of an issue.
When megaTinyCore is installed through board manager, the required version of the toolchain is installed automatically. All 0/1/2-Series parts are supported with no extra steps. Up until 2.2.7, we used the Arduino7 version of avr-gcc (gcc 7.3.0 and avr-libc 3.6.1) with the latest ATpacks as of June 2020. Starting with 2.2.7, we began using my Azduino build of the toolchain, which has updated ATpacks for all the newly supported parts. 2.2.7 used Azduino3, 2.3.0+ used Azduino4, and starting with 2.6.0, we use Azduino5 (though it offers no benefit for us other than saving a quarter GB of HDD space and 40 MB of download bandwidth if you install both megaTinyCore and DxCore through board manager).
Manual installation is more complicated - particularly if you want support for the 2-Series; see the Installation guide for more information.
An Arduino core for the tinyAVR 0-Series, 1-Series, and now the 2-Series. These parts have an improved architecture compared to the "classic" tinyAVR parts (which are supported by ATTinyCore), with improved peripherals and improved execution time for certain instructions (these are similar in both regards to the advanced AVR Dx-Series, as well as megaAVR 0-Series chips like the ATmega4809 as used on the official Nano Every and Uno Wifi Rev. 2 - although the Arduino team has done their best to kneecap them) in the low-cost, small packages typical of the ATtiny line. All of these parts feature at least one hardware UART, and an SPI and TWI interface (none of that USI garbage like, for example, the ATtiny85 has), a powerful event system, configurable custom logic, at least one on-chip analog comparator, a surprisingly accurate internal oscillator, and in the case of the 1-Series, an actual DAC output channel, and in the case of the 2-Series, a fancy differential ADC.
Moreover, the tinyAVR 0/1/2-Series parts are cheap - the highest-end parts, the 3226 and 3227, with 32k of flash and 3k of SRAM (vs the 2k SRAM of the ATmega328p used in the Uno/Nano/Pro Mini) run just over $1 USD in quantity - less than many 8k classic AVR ATtiny parts ("AVR instruction set, at a PIC price"). All of these parts are rated to run at 16 MHz or 20 MHz (at 4.5-5.5V) without an external crystal, and the internal oscillator is accurate enough for UART communication.
These use UPDI programming, not the traditional ISP that the classic ATtiny parts used. See below for more information. Getting a UPDI programmer is simple - you can use a classic 328p-based Arduino as a programmer with jtag2updi; or, for better results with cheaper hardware, you can use any USB-serial adapter and a resistor (and preferably a diode) with the included SerialUPDI tool; or you can use avrdude with one of the Microchip programmers (the mEDBG/nEDBG/EDBG-based programmers on their development boards, the Atmel-ICE, or the SNAP) or any UPDI programming tool that emulates one of those (which, to my knowledge, all of them do - if there is one that avrdude supports and my core doesn't, please open an issue to let me know!).
A serial bootloader, Optiboot_x (based on the same codebase as the classic Arduino Uno bootloader, though since greatly altered), is supported on these parts (0/1-Series support is currently live, 2-Series is expected by the first week of May; adjustments for the new parts are trivial), allowing them to be programmed over a traditional serial port. See the Optiboot section below for more information on this and the relevant options. Installing the bootloader does require a UPDI programmer. The assembled breakout boards I sell on Tindie are available pre-bootloaded (they are bootloaded on demand). That said, the user experience with Optiboot is a little disappointing on the 0/1-Series parts, as well as the 14-pin 2-Series parts, due to their lack of a hardware reset pin that could be used with the usual autoreset circuit to automatically reset into the bootloader when the serial port is opened. You need to either disable UPDI programming entirely (requiring an HV programmer if fuse settings or the bootloader need to be changed after initial bootloading) or leave UPDI enabled, but start any upload within 8 seconds of applying power. The 20-pin and 24-pin 2-Series parts support an "alternate reset pin", allowing these to act more like a traditional Arduino.
The UPDI (Universal Programming and Debugging Interface) is a single-wire interface for programming and debugging, used on the tinyAVR 0/1/2-Series as well as all other modern AVR microcontrollers. While one can always purchase a purpose-made UPDI programmer from Microchip, this is not recommended when you will be using the Arduino IDE rather than Microchip's (godawfully complicated) IDE. There are widespread reports of problems on Linux for the official Microchip programmers. There are two very low-cost alternative approaches to creating a UPDI programmer, both of which the Arduino community has more experience with than those official programmers.
Before megaTinyCore existed, there was a tool called pyupdi - a simple Python program for uploading to UPDI-equipped microcontrollers using a serial adapter modified by the addition of a single resistor. But pyupdi was not readily usable from the Arduino IDE, and so this was not an option. As of 2.2.0, megaTinyCore brings in a portable Python implementation, which opens a great many doors. Originally we were planning to adapt pyupdi, but at the urging of its author and several Microchip employees, we have instead based this functionality on pymcuprog, a "more robust" tool developed and "maintained by Microchip" which includes the same serial-port upload feature, only without the performance optimizations. If installing manually, you must add the Python package appropriate to your operating system in order to use this upload method (a system Python installation is not sufficient, nor is one necessary).
Read the SerialUPDI documentation for information on the wiring.
As of 2.3.2, with the dramatic improvements in performance, the proven reliability of the wiring scheme using a diode instead of a resistor, and in light of the flakiness of the jtag2updi firmware, this is now the recommended programming method. As of this version, programming speed has been increased by as much as a factor of 20, and now far exceeds what was possible with jtag2updi (programming via jtag2updi is roughly comparable in speed to SerialUPDI on the "SLOW" speed option, 57600 baud; the normal 230400 baud version programs about three times faster than the SLOW version or jtag2updi; the "TURBO" option runs at 460800 baud and increases upload speed by approximately 50% over the normal one). The TURBO speed version should only be used with devices running at 4.5V or more, as we have to run the UPDI clock faster to keep up (it is also not expected to be compatible with all serial adapters - this is an intentional tradeoff for improved performance), but it allows for upload and verification of a 32kB sketch in 4 seconds.
Three designs are being iterated: a dual-port serial adapter where both ports are serial, a dual-port serial adapter where one port is always UPDI, and a single-port one with a switch to select the mode, plus an optional add-on board to provide LEDs indicating the status of the modem control lines.
These will allow use of either an SMT JST-XH connector or a DuPont connector - either way with 6 pins for serial (FTDI pinout as marked) and 3 pins for UPDI.
All three of these will be able to supply 3.3V or Vusb (nominally 5V), or disconnect both Vusb and 3V3 from the power rail, in which case the target device is expected to be powered with 1.8V < Vdd < 5.5V, and the logic levels used will be the voltage of whatever is applied. Be warned that on dual serial devices, the VccIO power rail is shared! Both targets must be running at the same voltage (or be the same device), or the adapter must be set to supply them with their own power disconnected.
Depending on adapter model, and operating system, it has been found that different timing settings are required; however, settings needed to keep even 230400 baud from failing on Linux/Mac with most adapters impose a much larger time penalty on Windows, where the OS's serial handling is slow enough that nothing needs that delay...
The "write delay" mentioned here is to allow for the page erase-write command to finish executing; this takes a non-zero time. Depending on the adapter, USB latency and the implicit 2- or 3-byte buffer (it's like a USART, and probably implemented as one internally; the third byte that arrives has nowhere to go, because the hardware buffer is only 2 bytes deep) may be enough to allow it to work without an explicit delay. Or, it may fail partway through and report an "Error with st". The faster the adapter's latency timeout, and the faster the OS's serial handling, the greater the chance of this being a problem. This is controlled by the -wd command line parameter if executing prog.py manually. As of 2.5.6 this write delay is closer to the actual time requested (in ms); previously it had a granularity of several ms when 1 ms was all you needed, and as a result the penalty it imposed was brutal, particularly on Windows.
Selection guide:
460800+ baud requires the target to be running at 4.5V+ to remain in spec (in practice, it probably doesn't need to be quite that high - but it must be a voltage high enough to be stable at 16 MHz). We set the interface clock to the maximum for all speeds above 230400 baud - while a few adapters sometimes work at 460800 without this step (which in and of itself is strange - 460800 baud is 460800 baud, right?), most do not, and SerialUPDI has no way of determining which kind of adapter it is talking to.
CH340-based adapters have high enough latency on most platforms that all speed options work without resorting to the write delay.
Almost all adapters work on Windows at 230.4k without using the write delay. A rare few do not, including some native USB microcontrollers programmed to act as serial adapters (ex: SAMD11C).
Almost nothing except the CH340-based adapters will work at 460.8k or more without the write delay, regardless of platform.
On Windows, many adapters (even ones that really should support it) will be unsuccessful switching to 921600 baud. I do not know why. The symptom is a pause of a few seconds at the start as it tries, followed by uploading at 115200 baud. The only one I have had success with so far is the CH340, oddly enough.
460800 baud on Windows with the write delay is often slower than 230400 baud without it. The same is not true on Linux/Mac, and the smaller the page size, the larger the performance hit from write delay.
57600 baud should be used if other options are not working, or when programming at Vcc < 2.7V.
On some adapters, 460800 baud only works with the write delay enabled - unless a 10k resistor is placed across the Schottky diode between TX and RX, in which case it works without the write delay. No, I do not understand how this could be either!
As you can see from the above, this information is largely empirical; it is not yet known how to predict the behavior.
FTDI adapters (FT232, FT2232, FT4232, etc.), including the fake ones available on eBay/AliExpress for around $2, default on Windows to an excruciatingly long latency period of 16ms. Even with the lengths we go to in order to limit the number of latency delay periods we must wait through, this will prolong a 2.2-second upload to over 15 seconds. You must change this setting in order to get tolerable upload speeds:
Open control panel, device manager.
Expand Ports (COM and LPT)
Right click the port and choose properties
Click the Port Settings tab
Click "Advanced..." to open the advanced settings window.
Under the "BM Options" section, find the "Latency Timer" menu, which will likely be set to 16. Change this to 1.
Click OK to exit the advanced options window, and again to exit properties. You will see device manager refresh the list of hardware.
Uploads should be much faster now.
One can be made from a classic AVR Uno/Nano/Pro Mini; inexpensive Nano clones are the usual choice, being cheap enough that one can be wired up and then left that way permanently. We no longer provide detailed documentation for this process; jtag2updi is deprecated. If you are still using it, you should select jtag2updi from the Tools -> Programmer menu. This was previously our recommended option. Due to persistent jtag2updi bugs, and its reliance on the largely unmaintained 'avrdude' tool (which, among other things, inserts a spurious error message into all UPDI uploads made with it), this is no longer recommended.
Apparently Arduino isn't packaging 32-bit versions of the latest avrdude. I defined a new tool definition which is a copy of arduino18 (the latest) except that it pulls in version 17 instead on 32-bit Linux, since that's the best that's available for that platform. The arduino17 version does not correctly support uploading with some of the Microchip programming tools.
This is currently used only for the last few releases, and should fix the "avrdude not available for this platform" error.
tinyAVR 2-Series
ATtiny3227,1627,827,427
ATtiny3226,1626,826,426
ATtiny3224,1624,824,424
tinyAVR 1-Series
ATtiny3217,1617,817,417
ATtiny3216,1616,816,416
ATtiny1614,814,414,214
ATtiny412,212
tinyAVR 0-Series
ATtiny1607,807
ATtiny1606,806,406
ATtiny1604,804,404,204
ATtiny402,202
Anything named like "AVR##XX##" where X is a letter and # is a number - you want my DxCore for those
All of the classic (pre-2016) tinyAVR parts - these are almost all supported by one of my other cores, ATTinyCore
ATtiny 25/45/85, 24/44/84, 261/461/861, 48/88, the small and strange 43, the 2313/4313, and, as of ATTinyCore 2.0.0, the even stranger 26, as well as the "final four" (which show hints of experimentation in the direction of the modern AVRs): the ATtiny 441/841, 1634, and 828.
Anything else: see this document for a list of AVR part families and which Arduino cores they work with - almost everything has a core that offers support, usually by myself or MCUdude.
See this document covering all modern AVRs
Feature | 0-series | 1-series | 1+series | 2-series |
---|---|---|---|---|
Flash | 2k-16k | 2k-8k | 16k/32k | 4k-32k |
Pincount | 8-24 | 8-24 | 14-24 | 14-24 |
SRAM | 128b-1k | 128b-512b | 2k | 512b-3k |
TCD | No | Yes | Yes | No |
TCB | 1 | 1 | 2 | 2 |
ADC | 1x10bit | 1x10-bit | 2x10-bit | 1x12-bit w/PGA |
VREF pin | No | No | Yes | Yes |
AC | 1 | 1 | 3 | 1 |
Event * | 3 chan | 6 chan | 6 chan | 6 chan |
CCL ** | 2 LUT | 2 LUT | 2 LUT | 4 LUT |
*
Event channels, except on the 2-series tinyAVRs (and all non-tiny modern AVRs), are subdivided into two types - synchronous (to the system clock) and asynchronous. Not all generators can be used with a synchronous channel, some event users can only use the synchronous channels, and the channel lists are less consistent. This madness was abandoned at the first opportunity - even the mega0 had done away with that distinction.
**
Only the 2-series and non-tiny parts can fire an interrupt based on CCL state.
All parts have analog input available on most pins (all pins on PORTA, plus PB0-PB1 and PB4-PB5). The second ADC on the 1-series+ can use the pins on PORTC as inputs as well (see the analog reference for information about using these).
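For orientation, here is what reading one of those analog pins looks like in a sketch (a minimal illustration, not core code; the pin choice is arbitrary, and `PIN_PA4` simply uses the core's PIN_Pxn notation):

```cpp
// Minimal sketch: read an analog-capable PORTA pin and print the result.
// PIN_PA4 is an arbitrary example pin; the resolution of the result depends
// on the part and the core's analog settings.
void setup() {
  Serial.begin(115200);
}

void loop() {
  int reading = analogRead(PIN_PA4);
  Serial.println(reading);
  delay(500);
}
```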
These are the budget options. Though they are supported, they are not recommended. These never get the "boost" that the tinyAVR 1-series gets at 16k, have no second TCB in any configuration, no TCD, only 3 event channels, none of which can carry RTC event output. These parts have 2 CCL LUTs like the 1-series, and are available with up to 16k of flash in 14, 20, and 24-pin configurations (only 4k for 8-pin parts), and up to 1k SRAM.
These have 2k, 4k, or 8k of flash and 128, 256, or 512b of RAM, just like the 0-series. They do not have the second ADC, the triple-AC configuration, or the second TCB, though they do have the TCD.
All of a sudden, at 16k, the 1-series parts become far more interesting. Accompanying the larger flash is an arsenal of peripherals that seems fit for a much larger chip, and whether 16k or 32k, they all get 2k of SRAM. The whole second ADC is unique among AVRs. It seems to have been the testing ground for many features that showed up in refined form on the AVR Dx-series. The pricing does not appear to account for the vastly superior peripherals of the 16k 1-series parts.
As you can see from the table above, the 2-series is almost more of a sidegrade than an upgrade. They have a much better ADC, the event system and CCLs are "normal", they have more RAM, and the 14-pin part is available with 32k of flash (a 3214 was apparently planned but then canceled; it got far enough to be in the ATpack for a while before being removed).
I've written a brief summary of when you would want to use which series, if the right choice isn't obvious by now.
In the official Arduino board definition for their "megaavr" hardware package, they imply that the new architecture on the megaAVR 0-Series parts (which is nearly the same as used on the tinyAVR 0-Series and 1-Series) is called "megaavr" - that is not an official term. Microchip uses the term "megaAVR" to refer to any "ATmega" part, whether it has the old style or modern peripherals. There are no official terms to refer to all AVR parts of one family or the other, and a Microchip employee even denied that there was such a term internally. I'm not sure how you can manufacture two sets of parts, with the parts in each set having so much in common with each other and so little in common with the other set, with nobody coining a phrase to refer to either of them.
In this document, prior to 2.0.2, we used the Arduino convention, and despite well over a year having passed since then, I still keep finding places where I call them megaAVR. Please report this with a GitHub issue if you see any. Do note that the terms `avr` and `megaavr` are still used internally (for example, in libraries, to mark which parts a given library is compatible with, or to separate different versions of a file based on what they will run on). This will continue - we have to stick with this for compatibility with what the Arduino team started with the core for the Uno WiFi Rev. 2 and Nano Every.
In any event, some word is needed to refer to the two groups, and Microchip hasn't provided one. In the absence of an official term, I have been referring to the pre-2016 AVR devices (with PORTx, DDRx, etc. registers for pins) as "classic AVR" and the ones Arduino calls megaavr as "modern AVR". There also exist some parts whose I/O modules are largely more like classic AVRs but which also have a significantly worse version of the instruction set, and typical flash sizes of 1k or less. These use the AVRrc (for "reduced core") variant of AVR, whereas most classic AVRs use AVRe or AVRe+, and modern AVRs use AVRxt. The AVRrc parts are not supported by this core, and on the unfortunate occasions that I need to discuss these profoundly disappointing parts, I will refer to them as "Reduced Core AVR" parts, as that is their official name, even though I have much more colorful phrases for them. It is recommended that no design use a Reduced Core AVR, period. It's not that they're obsolete; they're just lousy. It is recommended that "modern AVRs" (those with the new peripherals and AVRxt instruction set) - whether Ex-series, Dx-series, tinyAVR 0/1/2, or mega0 - be used for all new designs.
Datasheet for the new tinyAVR 2-Series - While the datasheet only "covers" the 16k parts, they clearly state that there are no differences in features between parts with the same pin count (that is, there are no "golden" parts like the 16k/32k 1-Series), only between parts with different pin counts, and only as dictated by the pincount (that is, a feature on the 24 pin part will be on the 14-pin one, unless the 14-pin one doesn't have the pins that it needs, and it's something that can't be used without pins). 14, 20, and 24 pin parts are all listed with 4k, 8k, 16k and 32k flash; these flash size options, respectively, come with 512, 1024, 2048, and 3072 bytes of SRAM (that is, the 4k and 8k parts have double the SRAM), 4/8k parts get 128 bytes of EEPROM, the larger ones get 256. 14-pin parts come in SOIC and TSSOP, 20-pin in (wide) SOIC, SSOP, and that itty-bitty QFN like the 1616 (this time they gave us the 32k part in that package too, but good luck getting one, it's backordered everywhere - I couldn't score a single one) and 24-pin in the same VQFN as the 3217.
TWI, SPI, USART0, and AC0 are unchanged, as is NVMCTRL (the changes required to the bootloader were solely in relation to supporting the second USART). Clock options are unchanged. TCB0 and TCB1 got upgraded to the version in the Dx-Series (clock-off event option, cascade, and separate INTCTRL bits for OVF and CAPT - nice additions, but nothing relevant to the core itself), and all the parts have both TCBs. We now get 4 CCL LUTs and 2 sequencers, instead of 2 and 1 - and they can fire interrupts like other parts with CCL (and unlike the tinyAVR 0/1-Series). One of the most exciting features is that, as expected, they have a second USART (that noise you hear is the ATtiny841 and ATtiny1634 sobbing in the corner). The PORTMUX registers are now named like the rest of the modern AVRs - but we didn't lose the individual control over the pins for each TCA WO channel. EVSYS now works like it does on non-tinyAVR-0/1-Series parts (which is a welcome change - the 0/1-Series was the odd one out, and some of the ways in which their EVSYS was different sucked). The 1-Series features of TCD0, AC1/2, DAC0, and ADC1 are gone. In their stead, ADC0 is much fancier and almost unrecognizable - the first new AVR released since the buyout to feature a real differential ADC (cue another agonized wail from the poor '841, which also has an incredibly fancy ADC with great differential options, but which looks thoroughly dated next to the new ones)... Judging by the volume of posts on different topics that I've seen, I have a sense that a differential ADC wasn't at the top of most of your wish-lists - but it was at the top of the major chip customers' lists, and so that's what we're getting. And it was high time we got a proper differential ADC instead of the one on the Dx-series. And it is really, really fancy. See below.
megaTinyCore provides an analogRead() implementation, and more powerful functions to use the oversampling and PGA (see the analog feature section below).
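A hedged sketch of what that looks like in practice (consult the core's analog reference for the exact signatures and valid parameter ranges; the pin and resolution chosen here are arbitrary examples):

```cpp
// Hedged example of megaTinyCore's extended analog API; see the core's
// analog reference for exact parameter ranges. Values here are illustrative.
void setup() {
  Serial.begin(115200);
}

void loop() {
  // Request an oversampled-and-decimated reading at higher-than-native
  // resolution; the third argument selects PGA gain (0 = PGA not used).
  int32_t reading = analogReadEnh(PIN_PA4, 13, 0);
  Serial.println(reading);
  delay(500);
}
```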
Oh, and one more thing... the UPDI pin configuration has the old options - UPDI, I/O, or Reset... and a new one: UPDI on PA0, with a hardware RESET pin on PB4! Optiboot will finally be a viable and comfortable option, at least on the parts that have a PB4, i.e., not the 14-pin parts - which also happen to be (if my Tindie store sales are any indication) the most popular kind.
Do I think there will be a 3-series? I do not. The DD and EA-series are clearly coming after them, taking up strategic positions around tinyAVR territory. I think it's only a matter of time before the brand is eliminated, as the megaAVR brand was after the megaAVR 0-series. This is not necessarily a bad thing: all the Dx and EA-series parts are very similar in pin mappings and behavior, which is very nice. The tinies are less systematic, though they distribute pins to more peripherals. The guiding principle seems to have been "no peripheral left behind". Contrast this with the pin mappings of the Dx and EA-series, where everything follows a fixed master plan: parts either have or don't have a given pin, and if they don't, they don't have that function available. In both broad groups, I think there's a product manager whose job it is to crack a whip at engineers thinking of making an "exception" to the Holy Pinout (since those exceptions inevitably proliferate, and are how we wound up with the blindfolded-dartboard pin assignments on classic tinyAVR).
The pin numbering is weird on the tinyAVRs, and it's Microchip's fault - they numbered the pins within the ports strangely. It starts off in order, except that PA0 is UPDI and generally not usable; then the pins of PORTB are numbered in reverse order; then PORTC goes back to the same counterclockwise numbering as PORTA. Give me a break! Tradition is to use pin 0 for the first pin, and to make the last number the pin that you can't use without setting a fuse that makes the chip hard to program. I'd have much preferred to number them counterclockwise starting with PA0 without breaking unwritten conventions of Arduino code. One can argue that I made a poor decision on the pin mappings - perhaps they should have started with PA0 (unusable unless the fuse is set, in which case the chip is hard to program) as pin 0, then numbered the pins counterclockwise. But you still couldn't do the sort of tricks you could if all the ports were in order, unless you numbered the PORTB pins backwards. If you could get rid of the expectation that all pins be numbered in order (and only use PIN_Pxn notation), significant savings could be realized.
I predict that in 2-4 years' time, there will be AVR DA, DB, DD, and DU (the USB one) parts, the EA, and D/E/F-series parts down to pincounts of 8 (or at least 14), plus 64-pin parts with 128k flash and the new ADC - and nothing else branded ATtiny. Possibly the biggest question left is whether they're ever going to replace the ATmega2560 with a modern AVR with 100 total pins (probably 80-88 of which are I/O) and flash options up to 256k. That would present three issues. First, past 56 I/O pins there are no more VPORT registers left - the low I/O space is full with 28 VPORTs and 4 GPIORs. How will they handle the 4 extra ports? (On the 2560, they were just second-class ports that were accessed more slowly and didn't have single-cycle access. I have some musings about this, and the feasibility given how few opcodes are available, in Appendix A here.) Second, to breach the 128k barrier in flash, you have to go to a 17-bit program counter: all jumps take an extra cycle, and all returns take an extra cycle. Finally, if the AVR DB RAM ratio were retained, this DX part with 256k of flash would have 32k of RAM. Now recall how PROGMEM works on Dx - they couldn't go all the way to 32k. 24k of RAM is definitely possible, maybe even 28k, but 32k of RAM, plus 32k for mapped flash, leaves no room for the SFRs, which are in the same address space. So it will be interesting to see how they handle that.
I sell breakout boards with regulator, UPDI header, and serial header in my Tindie shop, as well as the bare boards. Buying from my store helps support further development of the core, and is a great way to get started using these exciting new parts with Arduino. Currently ATtiny1624 boards are available, but the 20- and 24-pin parts will not be sold as an assembled board until a newly revised PCB design is back from the board house to enable autoreset on the alt-reset pin. There is also a 14-pin board revision coming - though it is largely cosmetic. The yellow solder mask has got to go, as the readability seemed to get worse over the last several batches. The new boards also standardize a 0.6" spacing between the rows of pins, instead of the current 0.7" spacing, so you will be able to, for example, put machined pin header onto them and plug them into a wide-DIP socket, or use them with our prototyping board optimized for that row spacing. Assembled 0-Series boards are being discontinued, and will not be restocked once they sell out. The same will happen for the 16k 2-Series parts once the 32k ones are available.
The ADCs on the 2-series and EA-series are the best ADCs released on an AVR in the modern AVR era. Besides those two, the closest comparisons are the classic AVRs that got differential ADCs with top-notch features (the t841, mega2560, and (surprisingly) the t861 being the strongest competitors). While it isn't capable of the insane 100x and 200x gain that some parts bragged of in the classic AVR days, it was never clear to me how much of what was being amplified was simply noise (considering my admittedly limited experience playing with differential ADCs, I'm going to say "probably most of it, and definitely most of it if you let me design the hardware - I don't know analog!"). This new ADC is certainly highly capable, with true differential capability (unlike the DA and DB-series), and one which rises head and shoulders above anything available on any other modern AVR to date. The programmable gain amplifier is a new capability, and it remains to be seen what sort of feats of analog measurement people are able to get out of it; it certainly appears promising. It will be especially interesting to understand the differences between using the PGA at 1x gain and not using the PGA at all, and the benefits and disadvantages of each. (Microchip would be well served by a document that discussed how to choose the right ADC configuration for a task in the general case; I have raised this concern with Microchip, and the person I spoke to indicated that it was a high priority. While the situation has been greatly improved, it still appears that the doc group was specifically instructed not to make any actual concrete recommendations of any sort. This is unfortunate, because that's what I think most of us would like to see!)
The addition of 1024-sample accumulation for the purposes of oversampling and decimation is a welcome addition, though one which also risks underestimating the magnitude and relevance of offset error. Taking 1024 samples (all of which share a given offset error) and decimating the sum to yield a 17-bit ADC measurement makes it easy to imagine that any error would be confined to the lowest couple of bits. But if the offset error is, say, 5 LSB on a single measurement, then after accumulating 1024 samples and decimating you have an offset error of 160, and it is extremely easy to see that and think it's signal, not noise.
The first full-size (non-tiny) chip with the new ADC is available in 28-48 pin packages with up to 64k flash. There was the usual speculation about what, if anything, would change from the 2-series to the EA-series: it looks like the answer is that one of the confusing knobs was removed, and automatic sign chopping for accumulated measurements was added.
The type D timer is only used for PWM on the 20/24-pin 1-Series parts with the default PWM pin settings. On the smaller parts, it wouldn't let us increase the total number of PWM pins: only the WOC and WOD pins (on PC0 and PC1 respectively) don't already have TCA-driven PWM on them. Since analogWrite() does not support any features that would be enabled by turning off split mode (like 16-bit PWM) or enhanced by using the type D timer (like adjusting the frequency), using it would just be worse, because it would require additional space to store the routines to turn PWM on and off for two types of timer instead of one. This is not negligible on the smaller flash parts; it is on the order of 600 bytes: 150 for digitalWrite() and 450 for analogWrite(), if those are ever called on a TCD PWM pin. The optimizer should be able to optimize away that portion of those functions as long as the pins used with those functions do not include any TCD PWM pins. Note that the optimizer considers them independently; that is, digitalWrite() will include the code to turn off TCD PWM if it is used with a pin that uses TCD for PWM, whether or not you ever call analogWrite() on that pin.
Unlike almost every other AVR ever (I can think of maybe 3 examples, and only one of them is a "bonus" not an "unbonus"), there are additional "bonus" features based on the flash-size of parts within a family. The 16k and 32k versions (only) have a few extra features (which also don't appear to have been considered for pricing) - they all have 2k of ram, whether 16k or 32k, they have 3 analog comparators (including a window mode option), a second - desperately needed - type B timer - and weirdest of all they have a second ADC, differing only in which pins the channels correspond to!
Unlike classic AVRs, on these parts the flash is mapped to the same address space as the rest of the memory. This means pgm_read_*_near() is not needed to read directly from flash. Because of this, the compiler automatically puts any variable declared const into PROGMEM and accesses it appropriately - you no longer need to explicitly declare variables as PROGMEM. This includes quoted string literals, so the F() macro is no longer needed either - though to maintain compatibility with some third party libraries, F() still declares its argument PROGMEM.
However, do note that if you explicitly declare a variable PROGMEM, you must still use the pgm_read functions to read it, just like on classic AVRs. When a variable is declared PROGMEM on parts with memory-mapped flash, the pointer is offset (the address is relative to the start of flash, not the start of the address space); this same offset is applied when using the pgm_read_*_near() macros. Do note that declaring things PROGMEM and accessing them with the pgm_read_*_near functions, although it works fine, is slower and wastes a small amount of flash compared to simply declaring the variables const; the same goes for the F() macro with constant strings in 2.1.0 and later. (For a period of time before 2.1.0, F() did nothing - but that caused problems for third party libraries. The authors maintained that the problem was with the core, not the library, and my choice was to accept less efficiency or deny my users access to popular libraries.) Using the F() macro may be necessary for compatibility with some third party libraries (though the specific cases that forced the return of F() upon us were not of that sort - we were actually able to make the ones I knew of work with the F()-as-noop code, and they took up a few bytes less flash as a result).
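To illustrate the difference, here is a minimal hypothetical sketch for a memory-mapped-flash part (the table contents are arbitrary; this is a sketch of the idea, not code from the core):

```cpp
#include <Arduino.h>

// On parts with memory-mapped flash, a const table lands in flash
// automatically and is read directly - no pgm_read macros needed:
const uint8_t table[] = {1, 2, 4, 8, 16};

// If you *do* explicitly declare it PROGMEM, you are back to the
// classic-AVR rules and must use the pgm_read macros to access it:
const uint8_t table2[] PROGMEM = {1, 2, 4, 8, 16};

void setup() {
  Serial.begin(115200);
  Serial.println(table[2]);                        // direct read from flash
  Serial.println(pgm_read_byte_near(&table2[2]));  // offset pointer handled for you
  // F() still compiles, but a plain string literal stays in flash anyway:
  Serial.println(F("F() works, but is no longer required here"));
  Serial.println("and so does this, without the small overhead");
}

void loop() {}
```

The const version is the one to prefer on these parts: it is both smaller and faster, since no translation through the pgm_read machinery is involved.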
The automotive versions should also work. You must always select the 16 MHz-derived clock speeds on these parts. They do not support 20 MHz operation, and tuned clock options should not be used.
Now on to the good part, where we get to talk about how all this is exposed by megaTinyCore. We will start with the matter of how you should refer to pins for best results, then move on to core features and menu options, before ending with a series of links to documents with more detail on various subsystems.
The simple matter of how to refer to a pin for analogRead() and digitalRead(), particularly on non-standard hardware, has been a persistent source of confusion among Arduino users. It's my opinion that much of the blame rests with the decisions made by the Arduino team (and author of Wiring before them) regarding how pins were to be referred to; the designation of some pins as "analog pins" leads people to think that those pins cannot be used for digital operations (they are better thought of as "pins with analog input" - like how there are "pins that can output PWM"). The fact that pins have traditionally been renumbered has further muddied the water. For non-standard classic AVR parts, matters are often made even worse by multiple, incompatible "pin mappings" created by various authors over the years to make the part act "more like an Uno" or for some other purpose (ATTinyCore is a particular mess in this way, with some parts having three entirely different pin mappings, in at least one case, one of the alternate mappings is a devil-inspired work of pure evil, requiring nothing short of an additional lookup table to convert analog pins to digital pins).
This core uses a simple scheme for assigning the Arduino pin numbers: pins are numbered starting from the I/O pin closest to Vcc as pin 0 and proceeding counterclockwise, skipping the (mostly) non-usable UPDI pin. The UPDI pin is then assigned to the last pin number (as noted above, it is possible to read the UPDI pin (both analog and digital reads work) even if it is not set as GPIO). We recommend this as a last resort: the UPDI pin always has its pullup enabled when not set as a GPIO pin, and a signal which looks too much like the UPDI enable sequence will cause undesired operation.
In order to prevent all confusion about pin identities and eliminate ambiguity, we recommend using the PIN_Pxn notation to refer to pins unless you are using a development board with different numbers or names for the pins printed on it. This will maximize portability of your code to other similar hardware and make it easier to look up information on the pins you are using in the relevant datasheets, should that be necessary.
This is the recommended way to refer to pins. #defines are also provided of the form PIN_Pxn, where x is A, B, or C, and n is a number 0-7 (not to be confused with the PIN_An defines described below). These just resolve to the digital pin number of the pin in question - they don't go through a different code path or anything. However, they have particular utility in writing code that works across the product line with peripherals that are linked to certain pins (by port), as most peripherals are. Several pieces of demo code in the documentation take advantage of this. Direct port manipulation is possible as well - and in fact several powerful additional options are available for it - see direct port manipulation.
Note that it is PIN_Pxn - not Pxn, and not PIN_xn - those mean different things!
When a single number is used to refer to a pin - in the documentation, or in your code - it is always the "Arduino pin number". These are the pin numbers shown in orange (for pins capable of analogRead()) and blue (for pins that are not) on the pinout charts. All of the other ways of referring to pins are #defined to the corresponding Arduino pin number.
The core also provides An and PIN_An constants (where n is a number from 0 to 11). As with the official core, PIN_An is defined as the digital pin number of the pin shared with analog channel n. These refer to the ADC0 channel numbers. This naming system is similar to what was used on many classic AVR cores, but here they are just #defined as the corresponding Arduino pin number. If you need to get the analog channel number of a digital pin, use the digitalPinToAnalogInput(pin) macro - but you only need that if you're writing an advanced ADC library.
These parts (well, the 1/2-Series at least - the 0-Series was meant as a budget option, except they failed to shrink the budget, and they're only a couple of cents cheaper) provide an excellent toolbox of versatile and powerful peripherals; the top-end ones are on a par with or better than classic megaAVR parts - for a tinyAVR price. One of the guiding principles of the design of megaTinyCore, as with my other cores, is to allow the supported parts to reach their full potential - or as close to that as possible within the limitations of Arduino. This (very large) section covers the features of these parts and how they are exposed by megaTinyCore, as well as features of the core itself, organized by feature area. Do try to find the relevant feature area below if you're trying to use some chip feature and having trouble!
20 MHz Internal (4.5v-5.5v - typical for 5v systems)
16 MHz Internal (4.5v-5.5v - typical for 5v systems)
10 MHz Internal (2.7v-5.5v - typical for 3.3v systems)
8 MHz Internal (2.7v-5.5v - typical for 3.3v systems)
5 MHz Internal (1.8v-5.5v)
4 MHz Internal (1.8v-5.5v)
2 MHz Internal (1.8v-5.5v, poorly tested)
1 MHz Internal (1.8v-5.5v, poorly tested)
20 MHz External Clock (4.5v-5.5v, poorly tested)
16 MHz External Clock (4.5v-5.5v, poorly tested)
12 MHz External Clock (2.7v-5.5v, poorly tested)
10 MHz External Clock (2.7v-5.5v, poorly tested)
8 MHz External Clock (2.7v-5.5v, poorly tested)
6 MHz Internal (tuned, untested)
5 MHz Internal (tuned, poorly tested)
4 MHz Internal (tuned, poorly tested)
2 MHz Internal (tuned, poorly tested)
1 MHz Internal (tuned, poorly tested)
7 MHz Internal (tuned, for masochists, untested)
8 MHz Internal (tuned, poorly tested)
10 MHz Internal (tuned, poorly tested)
12 MHz Internal (tuned, untested)
14 MHz Internal (tuned, for masochists, untested)
16 MHz Internal (tuned)
20 MHz Internal (tuned)
24 MHz Internal (tuned, overclocked, poorly tested)
25 MHz Internal (tuned, overclocked, poorly tested)
30 MHz Internal (tuned, overclocked, poorly tested) - 0/1-Series require "20MHz" OSCCFG fuse setting; 2-Series parts may or may not be able to reach 30 with "16 MHz" selected.
32 MHz Internal (tuned, overclocked, poorly tested) - 2-Series only, very optimistic overclocking, may be unstable.
24 MHz External clock (Overclocked, poorly tested)
25 MHz External clock (Overclocked, poorly tested)
30 MHz External clock (Overclocked, poorly tested)
32 MHz External clock (Overclocked, poorly tested)
We make no claims about voltage or temperature ranges for overclocked parts - all we claim is that at least one of the chips we have tested worked at that speed at room temperature, running a specific sketch, at 5v. Your mileage is expected to vary, but to be generally better with an F-spec part versus an N or U-spec part.
Important - Read about Tuning before selecting any tuned option!
More information on these clock speeds can be found in the Clock Reference
Voltages shown are those guaranteed to work by manufacturer specifications. Unless pushing the bounds of the operating temperature range, these parts will typically do far better (2-Series parts generally work at 32 MHz and 5v at room temperature even from the internal oscillator; the 0/1-Series will likewise usually work at 32 MHz with an external clock provided the power supply is a stable 5.0-5.5V).
No action is required to set the OSCCFG
fuse when the sketch is uploaded via UPDI. When uploaded through Optiboot, the fuse cannot be changed, so whatever was chosen when the bootloader was burned is what is used, and only "burn bootloader" or uploading a sketch via UPDI will change that.
All internal oscillator clock speed options use the factory default calibration unless a "tuned" option is selected, in which case the calibration is adjusted as documented in the Tuning Reference. This can be used to get 16 MHz operation on an optiboot chip fused for 20 MHz and vice versa.
See Speed Grade reference for more information on the manufacturer's speed grades. Note that those are the voltages and clock speeds at which it is guaranteed to work. These parts are intended to be suitable for use in applications where an unexpected glitch of some description could pose a hazard to persons or property (think cars, industrial equipment, airplanes, nuclear reactors - places where people could die if the part malfunctioned) and I believe for military applications as well, which have similar reliability requirements, just for the opposite reason. Typical hobby users will be far more relaxed about the potential for stability issues, with crashes being little more than a nuisance, and the extremes of the extended temperature range parts being far beyond what we would ever need. Assuming the board had a waterproof coating, thermally, an N grade part should be able to function per the speed grade in a pot of boiling water. And that's just the N-spec. The F-spec should be good to 125!
It has been established that the extended temperature parts overclock better, which makes sense: a part that is spec'ed to run at 20 MHz at 125C would be expected to have a better chance of running at 32 MHz at room temperature than one spec'ed to run at 20 MHz only up to 105C.
As of version 2.4.0, we now provide an "Official Microchip Board" option. This doesn't do anything special other than defining LED_BUILTIN to be the pin that has the LED on that board, instead of A7, defining a macro PIN_BUTTON_BUILTIN as the pin with the user button on it, and making "upload" with the non-Optiboot version always use the onboard programmer/debugger; Tools -> Programmer will be used only for "Burn Bootloader" and "Upload Using Programmer". In the case of the ATtiny416 XPlained Nano, it also selects the version of the bootloader that uses the alternate pins for the serial port - it does not yet automatically use the alternate pins for USART0 as if you'd done Serial.swap(1); functionality to support default swapping of serial pins will come in a future update, alongside some other changes in the machinery underlying the pin swap mechanism which will hopefully also reduce flash usage.
As noted above, these may not work correctly on 32-bit Linux platforms. This is beyond my control; I don't build avrdude binaries, and I am not taking on that task too. I have too many already.
Why does blink() take more flash on the XPlained Mini vs the XPlained Pro? Both have the same ATtiny817! How can they be different?
For the same reason that blink will take more flash if you change it to use PIN_PC0 as opposed to PIN_PB4: PC0, used on the XPlained Mini, is a PWM pin, while PB4, used by the XPlained Pro, is not. Since that is the only pin that digitalWrite() is being used on, the compiler is free to optimize away anything that isn't needed for digitalWrite() on that pin, including the functionality to turn off PWM output on a pin that supports PWM. The difference vanishes if digitalWrite() is also used on a pin that supports PWM on both devices (resulting in the higher flash use on both) or if digitalWrite() is replaced with digitalWriteFast(), which will use less flash (but assumes you won't call it on a pin currently outputting PWM).
Whenever a UPDI programmer is used to upload code, all fuses that can be set "safely" (that is, without risk of bricking the board, or of bricking it for users without access to an HV programmer), and which have any built-in configuration options, will be set. Thus, except where noted, behavior will always match the selected tools menus. In summary, these are handled as follows:
WDTCFG will not be changed - it is not configured by megaTinyCore except to reset it to the factory default when doing "burn bootloader".
BODCFG will not be changed - not safe. You could set the BOD level to 4.3V on a 3.3v system, after which the part would need to have more than 4.3v applied to reprogram it. If it is on the same circuit board as parts that would be damaged by that voltage, this is a difficult situation to recover from.
OSCCFG will be set
TCD0CFG will not be changed - it is not configured by megaTinyCore except to reset it to the factory default when doing "burn bootloader".
SYSCFG0 will not be changed - not safe
SYSCFG1 will be set
APPEND will not be changed - it is not configured by megaTinyCore. There is insufficient demand to justify the development effort to make use of this as DxCore does.
BOOTEND will be set
LOCKBIT will not be changed - it is not configured by megaTinyCore; supporting the lockbits presents several additional complications, and commercial users with need of this facility are unlikely to be using the Arduino IDE to program production units.
BODCFG
is not safe, because setting this to a higher voltage than board is running at and enabling it will "brick" the board until a higher operating voltage can be supplied; this could be particularly awkward if it is soldered to the same PCB as devices which will not tolerate those voltages.
SYSCFG0
is not safe because this is where RSTPINCFG
lives; changing this can leave the board unprogrammable except via HV UPDI programming, and not everyone has an HV UPDI programmer. In the future if/when a programmer that guarantees HV UPDI capability which can be selected as a programmer (ie, it becomes possible to make a tools -> programmer option which will only work with HV programmers) this fuse will be set automatically when using that programmer.
As a result, in 2.2.0 and later, you no longer need to 'burn bootloader' to switch between 16-MHz-derived and 20-MHz-derived speeds when uploading via UPDI.
This core always uses Link Time Optimization to reduce flash usage - all versions of the compiler which support the tinyAVR 0/1/2-Series parts also support LTO, so there is no need to make it optional, as was done with ATTinyCore. This was a HUGE improvement in codesize when introduced, typically on the order of 5-20%!
These parts all have a large number of analog inputs - DA and DB-series have up to 22 analog inputs, while the DD-series has analog input on every pin that is not used to drive the HF crystal (though the pins on PORTC are only supported when MVIO is turned off). They can be read with analogRead()
like on a normal AVR, and we default to 10-bit resolution; you can change to the full 12-bit with analogReadResolution()
, and use the enhanced analogRead functions to take automatically oversampled, decimated readings for higher resolution and to take differential measurements. There are 4 internal voltage references - 1.024, 2.048, 4.096 and 2.5V - plus support for an external reference voltage (and Vdd, of course). ADC readings are taken 3 times faster than on a classic AVR, and that speed can be doubled again if what you are measuring is low impedance; alternatively, the sampling time can be greatly extended for reading very high impedance sources. This is detailed in the analog reference.
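A sketch using these features might look something like the following (a hedged sketch, not a definitive reference: consult the analog reference for the exact signatures, and note that the pin, the reference name, and the 15-bit target resolution are arbitrary choices for illustration):

```cpp
#include <Arduino.h>

void setup() {
  Serial.begin(115200);
  analogReference(INTERNAL2V048);  // one of the four internal references
  analogReadResolution(12);        // full 12-bit conversions instead of 10

  int32_t plain = analogRead(PIN_PD4);  // ordinary single reading

  // Oversampled-and-decimated reading at 15 bits: the core takes and
  // accumulates the extra samples, then decimates the sum for you.
  int32_t enh = analogReadEnh(PIN_PD4, 15);

  Serial.println(plain);
  Serial.println(enh);
}

void loop() {}
```

Keep the offset-error caveat discussed earlier in mind when interpreting the extra bits of an oversampled reading.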
The Dx-series parts have a 10-bit DAC which can generate a real analog voltage (note that this provides low current and can only be used as a voltage reference or control voltage, it cannot be