Merge branch 'rmobile/fsi-despair' into rmobile-fixes-for-linus

Paul Mundt 2010-11-24 16:21:08 +09:00
commit 5405652571
236 changed files with 2792 additions and 1390 deletions

View file

@ -16,7 +16,7 @@
</orgname>
<address>
<email>hjk@linutronix.de</email>
<email>hjk@hansjkoch.de</email>
</address>
</affiliation>
</author>
@ -114,7 +114,7 @@ GPL version 2.
<para>If you know of any translations for this document, or you are
interested in translating it, please email me
<email>hjk@linutronix.de</email>.
<email>hjk@hansjkoch.de</email>.
</para>
</sect1>
@ -171,7 +171,7 @@ interested in translating it, please email me
<title>Feedback</title>
<para>Find something wrong with this document? (Or perhaps something
right?) I would love to hear from you. Please email me at
<email>hjk@linutronix.de</email>.</para>
<email>hjk@hansjkoch.de</email>.</para>
</sect1>
</chapter>

View file

@ -154,7 +154,7 @@ The stages that a patch goes through are, generally:
inclusion, it should be accepted by a relevant subsystem maintainer -
though this acceptance is not a guarantee that the patch will make it
all the way to the mainline. The patch will show up in the maintainer's
subsystem tree and into the staging trees (described below). When the
subsystem tree and into the -next trees (described below). When the
process works, this step leads to more extensive review of the patch and
the discovery of any problems resulting from the integration of this
patch with work being done by others.
@ -236,7 +236,7 @@ finding the right maintainer. Sending patches directly to Linus is not
normally the right way to go.
2.4: STAGING TREES
2.4: NEXT TREES
The chain of subsystem trees guides the flow of patches into the kernel,
but it also raises an interesting question: what if somebody wants to look
@ -250,7 +250,7 @@ changes land in the mainline kernel. One could pull changes from all of
the interesting subsystem trees, but that would be a big and error-prone
job.
The answer comes in the form of staging trees, where subsystem trees are
The answer comes in the form of -next trees, where subsystem trees are
collected for testing and review. The older of these trees, maintained by
Andrew Morton, is called "-mm" (for memory management, which is how it got
started). The -mm tree integrates patches from a long list of subsystem
@ -275,7 +275,7 @@ directory at:
Use of the MMOTM tree is likely to be a frustrating experience, though;
there is a definite chance that it will not even compile.
The other staging tree, started more recently, is linux-next, maintained by
The other -next tree, started more recently, is linux-next, maintained by
Stephen Rothwell. The linux-next tree is, by design, a snapshot of what
the mainline is expected to look like after the next merge window closes.
Linux-next trees are announced on the linux-kernel and linux-next mailing
@ -303,12 +303,25 @@ volatility of linux-next tends to make it a difficult development target.
See http://lwn.net/Articles/289013/ for more information on this topic, and
stay tuned; much is still in flux where linux-next is involved.
Besides the mmotm and linux-next trees, the kernel source tree now contains
the drivers/staging/ directory and many sub-directories for drivers or
filesystems that are on their way to being added to the kernel tree
proper, but they remain in drivers/staging/ while they still need more
work.
2.4.1: STAGING TREES
The kernel source tree now contains the drivers/staging/ directory, which
holds the sub-directories for drivers or filesystems that are on their way
to being added to the kernel tree. They remain in drivers/staging/ while
they still need more work; once complete, they can be moved into the
kernel proper. This is a way to keep track of drivers that aren't yet up
to Linux kernel coding or quality standards, but that people may want to
use while tracking their development.
Greg Kroah-Hartman currently (as of 2.6.36) maintains the staging tree.
Drivers that still need work are sent to him, with each driver having
its own subdirectory in drivers/staging/. Along with the driver source
files, the directory should also contain a TODO file. The TODO
file lists the pending work that the driver needs for acceptance into
the kernel proper, as well as a list of people that should be Cc'd for any
patches to the driver. Staging drivers that don't currently build should
have their config entries depend upon CONFIG_BROKEN. Once they can
be successfully built without outside patches, CONFIG_BROKEN can be removed.
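As an illustration of that last rule (the driver name below is invented),
a staging driver that does not yet build carries the dependency in its
Kconfig entry like this:

config STAGING_FOO
	tristate "Hypothetical foo staging driver"
	depends on STAGING && BROKEN
	help
	  Example only: CONFIG_BROKEN is never enabled in practice, so
	  this entry cannot be selected until the driver builds and the
	  BROKEN dependency is dropped.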
2.5: TOOLS

View file

@ -89,7 +89,7 @@ static ssize_t childless_storeme_write(struct childless *childless,
char *p = (char *) page;
tmp = simple_strtoul(p, &p, 10);
if (!p || (*p && (*p != '\n')))
if ((*p != '\0') && (*p != '\n'))
return -EINVAL;
if (tmp > INT_MAX)
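For context, a small userspace sketch of the corrected check: strtoul(3),
like simple_strtoul(), always sets the end pointer, so the old NULL test
on p was dead code; what matters is the character the end pointer
addresses (an extra empty-input check is added here for robustness):

#include <stdio.h>
#include <stdlib.h>

/* Accept only "<digits>" or "<digits>\n", as the fixed kernel code does. */
static int parse_ulong(const char *s, unsigned long *out)
{
	char *endp;

	*out = strtoul(s, &endp, 10);
	if (endp == s)
		return -1;		/* no digits at all */
	if (*endp != '\0' && *endp != '\n')
		return -1;		/* trailing garbage */
	return 0;
}

int main(void)
{
	unsigned long v;

	printf("%d\n", parse_ulong("42\n", &v));	/* 0: accepted */
	printf("%d\n", parse_ulong("42x", &v));		/* -1: rejected */
	return 0;
}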

View file

@ -617,6 +617,16 @@ and have the following read/write attributes:
is configured as an output, this value may be written;
any nonzero value is treated as high.
If the pin can be configured as an interrupt-generating pin
and if it has been configured to generate interrupts (see the
description of "edge"), you can poll(2) on that file and
poll(2) will return whenever the interrupt is triggered. If
you use poll(2), set the events POLLPRI and POLLERR. If you
use select(2), set the file descriptor in exceptfds. After
poll(2) returns, either lseek(2) to the beginning of the sysfs
file and read the new value, or close the file and re-open it
to read the value. (A short userspace example follows the
"edge" attribute below.)
"edge" ... reads as either "none", "rising", "falling", or
"both". Write these strings to select the signal edge(s)
that will make poll(2) on the "value" file return.
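A minimal userspace sketch of the sequence described above (gpio42 is a
made-up pin; it must already be exported, with its "edge" file configured):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[8];
	struct pollfd pfd;
	int fd = open("/sys/class/gpio/gpio42/value", O_RDONLY);

	if (fd < 0)
		return 1;
	read(fd, buf, sizeof(buf) - 1);		/* consume the current state */

	pfd.fd = fd;
	pfd.events = POLLPRI | POLLERR;		/* as the text prescribes */
	if (poll(&pfd, 1, -1) > 0) {		/* blocks until an edge fires */
		ssize_t n;

		lseek(fd, 0, SEEK_SET);		/* rewind before re-reading */
		n = read(fd, buf, sizeof(buf) - 1);
		if (n > 0) {
			buf[n] = '\0';
			printf("value: %s", buf);
		}
	}
	close(fd);
	return 0;
}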

View file

@ -11,7 +11,7 @@ Authors:
Mark M. Hoffman <mhoffman@lightlink.com>
Ported to 2.6 by Eric J. Bowersox <ericb@aspsys.com>
Adapted to 2.6.20 by Carsten Emde <ce@osadl.org>
Modified for mainline integration by Hans J. Koch <hjk@linutronix.de>
Modified for mainline integration by Hans J. Koch <hjk@hansjkoch.de>
Module Parameters
-----------------

View file

@ -8,7 +8,7 @@ Supported chips:
Datasheet: http://pdfserv.maxim-ic.com/en/ds/MAX6650-MAX6651.pdf
Authors:
Hans J. Koch <hjk@linutronix.de>
Hans J. Koch <hjk@hansjkoch.de>
John Morris <john.morris@spirentcom.com>
Claus Gindhart <claus.gindhart@kontron.com>

View file

@ -1359,7 +1359,7 @@ F: include/net/bluetooth/
BONDING DRIVER
M: Jay Vosburgh <fubar@us.ibm.com>
L: bonding-devel@lists.sourceforge.net
L: netdev@vger.kernel.org
W: http://sourceforge.net/projects/bonding/
S: Supported
F: drivers/net/bonding/
@ -1829,6 +1829,13 @@ W: http://www.chelsio.com
S: Supported
F: drivers/net/cxgb4vf/
STMMAC ETHERNET DRIVER
M: Giuseppe Cavallaro <peppe.cavallaro@st.com>
L: netdev@vger.kernel.org
W: http://www.stlinux.com
S: Supported
F: drivers/net/stmmac/
CYBERPRO FB DRIVER
M: Russell King <linux@arm.linux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -2008,6 +2015,7 @@ F: drivers/hwmon/dme1737.c
DOCBOOK FOR DOCUMENTATION
M: Randy Dunlap <rdunlap@xenotime.net>
S: Maintained
F: scripts/kernel-doc
DOCKING STATION DRIVER
M: Shaohua Li <shaohua.li@intel.com>
@ -2018,6 +2026,7 @@ F: drivers/acpi/dock.c
DOCUMENTATION
M: Randy Dunlap <rdunlap@xenotime.net>
L: linux-doc@vger.kernel.org
T: quilt oss.oracle.com/~rdunlap/kernel-doc-patches/current/
S: Maintained
F: Documentation/

View file

@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 37
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Flesh-Eating Bats with Fangs
# *DOCUMENTATION*

View file

@ -359,8 +359,8 @@ static struct clk_lookup dm355_clks[] = {
CLK(NULL, "uart1", &uart1_clk),
CLK(NULL, "uart2", &uart2_clk),
CLK("i2c_davinci.1", NULL, &i2c_clk),
CLK("davinci-asp.0", NULL, &asp0_clk),
CLK("davinci-asp.1", NULL, &asp1_clk),
CLK("davinci-mcbsp.0", NULL, &asp0_clk),
CLK("davinci-mcbsp.1", NULL, &asp1_clk),
CLK("davinci_mmc.0", NULL, &mmcsd0_clk),
CLK("davinci_mmc.1", NULL, &mmcsd1_clk),
CLK("spi_davinci.0", NULL, &spi0_clk),
@ -664,7 +664,7 @@ static struct resource dm355_asp1_resources[] = {
};
static struct platform_device dm355_asp1_device = {
.name = "davinci-asp",
.name = "davinci-mcbsp",
.id = 1,
.num_resources = ARRAY_SIZE(dm355_asp1_resources),
.resource = dm355_asp1_resources,

View file

@ -459,7 +459,7 @@ static struct clk_lookup dm365_clks[] = {
CLK(NULL, "usb", &usb_clk),
CLK("davinci_emac.1", NULL, &emac_clk),
CLK("davinci_voicecodec", NULL, &voicecodec_clk),
CLK("davinci-asp.0", NULL, &asp0_clk),
CLK("davinci-mcbsp", NULL, &asp0_clk),
CLK(NULL, "rto", &rto_clk),
CLK(NULL, "mjcp", &mjcp_clk),
CLK(NULL, NULL, NULL),
@ -922,8 +922,8 @@ static struct resource dm365_asp_resources[] = {
};
static struct platform_device dm365_asp_device = {
.name = "davinci-asp",
.id = 0,
.name = "davinci-mcbsp",
.id = -1,
.num_resources = ARRAY_SIZE(dm365_asp_resources),
.resource = dm365_asp_resources,
};

View file

@ -302,7 +302,7 @@ static struct clk_lookup dm644x_clks[] = {
CLK("davinci_emac.1", NULL, &emac_clk),
CLK("i2c_davinci.1", NULL, &i2c_clk),
CLK("palm_bk3710", NULL, &ide_clk),
CLK("davinci-asp", NULL, &asp_clk),
CLK("davinci-mcbsp", NULL, &asp_clk),
CLK("davinci_mmc.0", NULL, &mmcsd_clk),
CLK(NULL, "spi", &spi_clk),
CLK(NULL, "gpio", &gpio_clk),
@ -580,7 +580,7 @@ static struct resource dm644x_asp_resources[] = {
};
static struct platform_device dm644x_asp_device = {
.name = "davinci-asp",
.name = "davinci-mcbsp",
.id = -1,
.num_resources = ARRAY_SIZE(dm644x_asp_resources),
.resource = dm644x_asp_resources,

View file

@ -567,38 +567,127 @@ static struct platform_device *qhd_devices[] __initdata = {
/* FSI */
#define IRQ_FSI evt2irq(0x1840)
static int __fsi_set_rate(struct clk *clk, long rate, int enable)
{
int ret = 0;
static int fsi_set_rate(int is_porta, int rate)
if (rate <= 0)
return ret;
if (enable) {
ret = clk_set_rate(clk, rate);
if (0 == ret)
ret = clk_enable(clk);
} else {
clk_disable(clk);
}
return ret;
}
static int __fsi_set_round_rate(struct clk *clk, long rate, int enable)
{
return __fsi_set_rate(clk, clk_round_rate(clk, rate), enable);
}
static int fsi_ak4642_set_rate(struct device *dev, int rate, int enable)
{
struct clk *fsia_ick;
struct clk *fsiack;
int ret = -EIO;
fsia_ick = clk_get(dev, "icka");
if (IS_ERR(fsia_ick))
return PTR_ERR(fsia_ick);
/*
* FSIACK is connected to the AK4642 and uses the external
* clock pin from it; it is the parent of fsia_ick now.
*/
fsiack = clk_get_parent(fsia_ick);
if (!fsiack)
goto fsia_ick_out;
/*
* We get a 1/1 divided clock by setting the same rate on fsiack
* and fsia_ick.
*
** FIXME **
* Because the freq_table entries of the external clock (fsiack)
* are all 0, clk_round_rate() returns 0, so __fsi_set_rate() is
* used here instead.
*/
ret = __fsi_set_rate(fsiack, rate, enable);
if (ret < 0)
goto fsiack_out;
ret = __fsi_set_round_rate(fsia_ick, rate, enable);
if ((ret < 0) && enable)
__fsi_set_round_rate(fsiack, rate, 0); /* disable FSI ACK */
fsiack_out:
clk_put(fsiack);
fsia_ick_out:
clk_put(fsia_ick);
return 0;
}
static int fsi_hdmi_set_rate(struct device *dev, int rate, int enable)
{
struct clk *fsib_clk;
struct clk *fdiv_clk = &sh7372_fsidivb_clk;
long fsib_rate = 0;
long fdiv_rate = 0;
int ackmd_bpfmd;
int ret;
/* set_rate is not needed if port A */
if (is_porta)
return 0;
fsib_clk = clk_get(NULL, "fsib_clk");
if (IS_ERR(fsib_clk))
return -EINVAL;
switch (rate) {
case 44100:
clk_set_rate(fsib_clk, clk_round_rate(fsib_clk, 11283000));
ret = SH_FSI_ACKMD_256 | SH_FSI_BPFMD_64;
fsib_rate = rate * 256;
ackmd_bpfmd = SH_FSI_ACKMD_256 | SH_FSI_BPFMD_64;
break;
case 48000:
clk_set_rate(fsib_clk, clk_round_rate(fsib_clk, 85428000));
clk_set_rate(fdiv_clk, clk_round_rate(fdiv_clk, 12204000));
ret = SH_FSI_ACKMD_256 | SH_FSI_BPFMD_64;
fsib_rate = 85428000; /* around 48kHz x 256 x 7 */
fdiv_rate = rate * 256;
ackmd_bpfmd = SH_FSI_ACKMD_256 | SH_FSI_BPFMD_64;
break;
default:
pr_err("unsupported rate in FSI2 port B\n");
ret = -EINVAL;
break;
return -EINVAL;
}
/* FSI B setting */
fsib_clk = clk_get(dev, "ickb");
if (IS_ERR(fsib_clk))
return -EIO;
ret = __fsi_set_round_rate(fsib_clk, fsib_rate, enable);
clk_put(fsib_clk);
if (ret < 0)
return ret;
/* FSI DIV setting */
ret = __fsi_set_round_rate(fdiv_clk, fdiv_rate, enable);
if (ret < 0) {
/* disable FSI B */
if (enable)
__fsi_set_round_rate(fsib_clk, fsib_rate, 0);
return ret;
}
return ackmd_bpfmd;
}
static int fsi_set_rate(struct device *dev, int is_porta, int rate, int enable)
{
int ret;
if (is_porta)
ret = fsi_ak4642_set_rate(dev, rate, enable);
else
ret = fsi_hdmi_set_rate(dev, rate, enable);
return ret;
}
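A rough sketch of the clk API pattern these helpers wrap (kernel context;
the "icka" clock name is taken from the board code above, and the error
handling is illustrative, mirroring __fsi_set_round_rate()):

#include <linux/clk.h>
#include <linux/err.h>

static int example_set_round_rate(struct device *dev, unsigned long rate)
{
	struct clk *clk;
	long rounded;
	int ret;

	clk = clk_get(dev, "icka");		/* FSI port A input clock */
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	rounded = clk_round_rate(clk, rate);	/* nearest achievable rate */
	if (rounded <= 0) {
		clk_put(clk);
		return -EINVAL;
	}

	ret = clk_set_rate(clk, rounded);
	if (ret == 0)
		ret = clk_enable(clk);

	clk_put(clk);	/* drops the reference only; the clock stays
			 * enabled until a matching clk_disable() */
	return ret;
}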
@ -880,6 +969,11 @@ static int __init hdmi_init_pm_clock(void)
goto out;
}
ret = clk_enable(&sh7372_pllc2_clk);
if (ret < 0) {
pr_err("Cannot enable pllc2 clock\n");
goto out;
}
pr_debug("PLLC2 set frequency %lu\n", rate);
ret = clk_set_parent(hdmi_ick, &sh7372_pllc2_clk);
@ -896,23 +990,11 @@ static int __init hdmi_init_pm_clock(void)
device_initcall(hdmi_init_pm_clock);
#define FSIACK_DUMMY_RATE 48000
static int __init fsi_init_pm_clock(void)
{
struct clk *fsia_ick;
int ret;
/*
* FSIACK is connected to AK4642,
* and the rate is depend on playing sound rate.
* So, set dummy rate (= 48k) here
*/
ret = clk_set_rate(&sh7372_fsiack_clk, FSIACK_DUMMY_RATE);
if (ret < 0) {
pr_err("Cannot set FSIACK dummy rate: %d\n", ret);
return ret;
}
fsia_ick = clk_get(&fsi_device.dev, "icka");
if (IS_ERR(fsia_ick)) {
ret = PTR_ERR(fsia_ick);
@ -921,16 +1003,9 @@ static int __init fsi_init_pm_clock(void)
}
ret = clk_set_parent(fsia_ick, &sh7372_fsiack_clk);
if (ret < 0) {
pr_err("Cannot set FSI-A parent: %d\n", ret);
goto out;
}
ret = clk_set_rate(fsia_ick, FSIACK_DUMMY_RATE);
if (ret < 0)
pr_err("Cannot set FSI-A rate: %d\n", ret);
pr_err("Cannot set FSI-A parent: %d\n", ret);
out:
clk_put(fsia_ick);
return ret;

View file

@ -230,21 +230,13 @@ static int pllc2_set_rate(struct clk *clk,
if (idx < 0)
return idx;
if (rate == clk->parent->rate) {
pllc2_disable(clk);
return 0;
}
if (rate == clk->parent->rate)
return -EINVAL;
value = __raw_readl(PLLC2CR) & ~(0x3f << 24);
if (value & 0x80000000)
pllc2_disable(clk);
__raw_writel((value & ~0x80000000) | ((idx + 19) << 24), PLLC2CR);
if (value & 0x80000000)
return pllc2_enable(clk);
return 0;
}
@ -453,10 +445,8 @@ static int fsidiv_enable(struct clk *clk)
unsigned long value;
value = __raw_readl(clk->mapping->base) >> 16;
if (value < 2) {
fsidiv_disable(clk);
return -ENOENT;
}
if (value < 2)
return -EIO;
__raw_writel((value << 16) | 0x3, clk->mapping->base);
@ -468,17 +458,12 @@ static int fsidiv_set_rate(struct clk *clk,
{
int idx;
if (clk->parent->rate == rate) {
fsidiv_disable(clk);
return 0;
}
idx = (clk->parent->rate / rate) & 0xffff;
if (idx < 2)
return -ENOENT;
return -EINVAL;
__raw_writel(idx << 16, clk->mapping->base);
return fsidiv_enable(clk);
return 0;
}
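A userspace sketch of the divider arithmetic in fsidiv_set_rate(): the
index is the integer ratio of parent rate to target rate, masked to 16
bits, with values below 2 rejected (now as -EINVAL rather than -ENOENT):

#include <stdio.h>

static int fsidiv_index(unsigned long parent_rate, unsigned long rate)
{
	unsigned long idx = (parent_rate / rate) & 0xffff;

	return idx < 2 ? -1 : (int)idx;
}

int main(void)
{
	printf("%d\n", fsidiv_index(12288000, 6144000));  /* 2: valid */
	printf("%d\n", fsidiv_index(12288000, 12288000)); /* -1: 1/1 rejected */
	return 0;
}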
static struct clk_ops fsidiv_clk_ops = {
@ -609,8 +594,6 @@ static struct clk_lookup lookups[] = {
CLKDEV_CON_ID("vck3_clk", &div6_clks[DIV6_VCK3]),
CLKDEV_CON_ID("fmsi_clk", &div6_clks[DIV6_FMSI]),
CLKDEV_CON_ID("fmso_clk", &div6_clks[DIV6_FMSO]),
CLKDEV_CON_ID("fsia_clk", &div6_reparent_clks[DIV6_FSIA]),
CLKDEV_CON_ID("fsib_clk", &div6_reparent_clks[DIV6_FSIB]),
CLKDEV_CON_ID("sub_clk", &div6_clks[DIV6_SUB]),
CLKDEV_CON_ID("spu_clk", &div6_clks[DIV6_SPU]),
CLKDEV_CON_ID("vou_clk", &div6_clks[DIV6_VOU]),

View file

@ -4,6 +4,10 @@ config PPC32
bool
default y if !PPC64
config 32BIT
bool
default y if PPC32
config 64BIT
bool
default y if PPC64

View file

@ -33,9 +33,10 @@ __div64_32:
cntlzw r0,r5 # we are shifting the dividend right
li r10,-1 # to make it < 2^32, and shifting
srw r10,r10,r0 # the divisor right the same amount,
add r9,r4,r10 # rounding up (so the estimate cannot
addc r9,r4,r10 # rounding up (so the estimate cannot
andc r11,r6,r10 # ever be too large, only too small)
andc r9,r9,r10
addze r9,r9
or r11,r5,r11
rotlw r9,r9,r0
rotlw r11,r11,r0

View file

@ -337,7 +337,7 @@ char *dbg_get_reg(int regno, void *mem, struct pt_regs *regs)
/* FP registers 32 -> 63 */
#if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_SPE)
if (current)
memcpy(mem, current->thread.evr[regno-32],
memcpy(mem, &current->thread.evr[regno-32],
dbg_reg_def[regno].size);
#else
/* fp registers not used by kernel, leave zero */
@ -362,7 +362,7 @@ int dbg_set_reg(int regno, void *mem, struct pt_regs *regs)
if (regno >= 32 && regno < 64) {
/* FP registers 32 -> 63 */
#if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_SPE)
memcpy(current->thread.evr[regno-32], mem,
memcpy(&current->thread.evr[regno-32], mem,
dbg_reg_def[regno].size);
#else
/* fp registers not used by kernel, leave zero */
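A userspace sketch of the bug class fixed here: thread.evr[] stores
register values, not pointers, so passing evr[i] to memcpy() treats a
saved register value as a source address; &evr[i] is the element's own
storage (the array contents below are made up):

#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned int evr[32] = { [7] = 0xdeadbeef };	/* register values */
	unsigned int out;

	memcpy(&out, &evr[7], sizeof(out));	/* correct: element address */
	/* memcpy(&out, evr[7], sizeof(out)) would instead read from
	 * address 0xdeadbeef -- the bug the hunks above fix. */
	printf("0x%x\n", out);
	return 0;
}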

View file

@ -497,9 +497,8 @@ static void __init emergency_stack_init(void)
}
/*
* Called into from start_kernel, after lock_kernel has been called.
* Initializes bootmem, which is unsed to manage page allocation until
* mem_init is called.
* Called into from start_kernel, this initializes bootmem, which is used
* to manage page allocation until mem_init is called.
*/
void __init setup_arch(char **cmdline_p)
{

View file

@ -1123,7 +1123,7 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
else
#endif /* CONFIG_PPC_HAS_HASH_64K */
rc = __hash_page_4K(ea, access, vsid, ptep, trap, local, ssize,
subpage_protection(pgdir, ea));
subpage_protection(mm, ea));
/* Dump some info in case of hash insertion failure, they should
* never happen so it is really useful to know if/when they do

View file

@ -138,8 +138,11 @@
cmpldi cr0,r15,0 /* Check for user region */
std r14,EX_TLB_ESR(r12) /* write crazy -1 to frame */
beq normal_tlb_miss
li r11,_PAGE_PRESENT|_PAGE_BAP_SX /* Base perm */
oris r11,r11,_PAGE_ACCESSED@h
/* XXX replace the RMW cycles with immediate loads + writes */
1: mfspr r10,SPRN_MAS1
mfspr r10,SPRN_MAS1
cmpldi cr0,r15,8 /* Check for vmalloc region */
rlwinm r10,r10,0,16,1 /* Clear TID */
mtspr SPRN_MAS1,r10

View file

@ -585,6 +585,6 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
ppc64_rma_size = min_t(u64, first_memblock_size, 0x40000000);
/* Finally limit subsequent allocations */
memblock_set_current_limit(ppc64_memblock_base + ppc64_rma_size);
memblock_set_current_limit(first_memblock_base + ppc64_rma_size);
}
#endif /* CONFIG_PPC64 */

View file

@ -47,6 +47,12 @@ config LPARCFG
config PPC_PSERIES_DEBUG
depends on PPC_PSERIES && PPC_EARLY_DEBUG
bool "Enable extra debug logging in platforms/pseries"
help
Say Y here if you want the pseries core to produce a bunch of
debug messages to the system log. Select this if you are having a
problem with the pseries core and want to see more of what is
going on. This does not enable debugging in lpar.c, which must
be manually done due to its verbosity.
default y
config PPC_SMLPAR

View file

@ -21,8 +21,6 @@
* Please address comments and feedback to Linas Vepstas <linas@austin.ibm.com>
*/
#undef DEBUG
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/list.h>

View file

@ -25,8 +25,6 @@
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#undef DEBUG
#include <linux/pci.h>
#include <asm/pci-bridge.h>
#include <asm/ppc-pci.h>

View file

@ -61,9 +61,9 @@ DEFINE_GUEST_HANDLE(void);
#define HYPERVISOR_VIRT_START mk_unsigned_long(__HYPERVISOR_VIRT_START)
#endif
#ifndef machine_to_phys_mapping
#define machine_to_phys_mapping ((unsigned long *)HYPERVISOR_VIRT_START)
#endif
#define MACH2PHYS_VIRT_START mk_unsigned_long(__MACH2PHYS_VIRT_START)
#define MACH2PHYS_VIRT_END mk_unsigned_long(__MACH2PHYS_VIRT_END)
#define MACH2PHYS_NR_ENTRIES ((MACH2PHYS_VIRT_END-MACH2PHYS_VIRT_START)>>__MACH2PHYS_SHIFT)
/* Maximum number of virtual CPUs in multi-processor guests. */
#define MAX_VIRT_CPUS 32

View file

@ -32,6 +32,11 @@
/* And the trap vector is... */
#define TRAP_INSTR "int $0x82"
#define __MACH2PHYS_VIRT_START 0xF5800000
#define __MACH2PHYS_VIRT_END 0xF6800000
#define __MACH2PHYS_SHIFT 2
/*
* Virtual addresses beyond this are not modifiable by guest OSes. The
* machine->physical mapping table starts at this address, read-only.

View file

@ -39,18 +39,7 @@
#define __HYPERVISOR_VIRT_END 0xFFFF880000000000
#define __MACH2PHYS_VIRT_START 0xFFFF800000000000
#define __MACH2PHYS_VIRT_END 0xFFFF804000000000
#ifndef HYPERVISOR_VIRT_START
#define HYPERVISOR_VIRT_START mk_unsigned_long(__HYPERVISOR_VIRT_START)
#define HYPERVISOR_VIRT_END mk_unsigned_long(__HYPERVISOR_VIRT_END)
#endif
#define MACH2PHYS_VIRT_START mk_unsigned_long(__MACH2PHYS_VIRT_START)
#define MACH2PHYS_VIRT_END mk_unsigned_long(__MACH2PHYS_VIRT_END)
#define MACH2PHYS_NR_ENTRIES ((MACH2PHYS_VIRT_END-MACH2PHYS_VIRT_START)>>3)
#ifndef machine_to_phys_mapping
#define machine_to_phys_mapping ((unsigned long *)HYPERVISOR_VIRT_START)
#endif
#define __MACH2PHYS_SHIFT 3
/*
* int HYPERVISOR_set_segment_base(unsigned int which, unsigned long base)
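A quick userspace check of the table geometry these constants imply: the
number of machine-to-phys entries is the size of the virtual window
divided by the entry size (8 bytes, hence shift 3, on x86-64):

#include <stdio.h>

int main(void)
{
	unsigned long start = 0xFFFF800000000000UL; /* __MACH2PHYS_VIRT_START */
	unsigned long end   = 0xFFFF804000000000UL; /* __MACH2PHYS_VIRT_END */
	int shift = 3;				    /* 8-byte entries */

	printf("window: %lu GiB, entries: %#lx\n",
	       (end - start) >> 30, (end - start) >> shift);
	return 0;
}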

View file

@ -5,6 +5,7 @@
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/pfn.h>
#include <linux/mm.h>
#include <asm/uaccess.h>
#include <asm/page.h>
@ -35,6 +36,8 @@ typedef struct xpaddr {
#define MAX_DOMAIN_PAGES \
((unsigned long)((u64)CONFIG_XEN_MAX_DOMAIN_MEMORY * 1024 * 1024 * 1024 / PAGE_SIZE))
extern unsigned long *machine_to_phys_mapping;
extern unsigned int machine_to_phys_order;
extern unsigned long get_phys_to_machine(unsigned long pfn);
extern bool set_phys_to_machine(unsigned long pfn, unsigned long mfn);
@ -69,10 +72,8 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
if (xen_feature(XENFEAT_auto_translated_physmap))
return mfn;
#if 0
if (unlikely((mfn >> machine_to_phys_order) != 0))
return max_mapnr;
#endif
return ~0;
pfn = 0;
/*

View file

@ -315,14 +315,18 @@ static void kgdb_remove_all_hw_break(void)
if (!breakinfo[i].enabled)
continue;
bp = *per_cpu_ptr(breakinfo[i].pev, cpu);
if (bp->attr.disabled == 1)
if (!bp->attr.disabled) {
arch_uninstall_hw_breakpoint(bp);
bp->attr.disabled = 1;
continue;
}
if (dbg_is_early)
early_dr7 &= ~encode_dr7(i, breakinfo[i].len,
breakinfo[i].type);
else
arch_uninstall_hw_breakpoint(bp);
bp->attr.disabled = 1;
else if (hw_break_release_slot(i))
printk(KERN_ERR "KGDB: hw bpt remove failed %lx\n",
breakinfo[i].addr);
breakinfo[i].enabled = 0;
}
}

View file

@ -3395,6 +3395,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
load_host_msrs(vcpu);
kvm_load_ldt(ldt_selector);
loadsegment(fs, fs_selector);
#ifdef CONFIG_X86_64
load_gs_index(gs_selector);
@ -3402,7 +3403,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
#else
loadsegment(gs, gs_selector);
#endif
kvm_load_ldt(ldt_selector);
reload_tss(vcpu);

View file

@ -821,10 +821,9 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
#endif
#ifdef CONFIG_X86_64
if (is_long_mode(&vmx->vcpu)) {
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
if (is_long_mode(&vmx->vcpu))
wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
}
#endif
for (i = 0; i < vmx->save_nmsrs; ++i)
kvm_set_shared_msr(vmx->guest_msrs[i].index,
@ -839,23 +838,23 @@ static void __vmx_load_host_state(struct vcpu_vmx *vmx)
++vmx->vcpu.stat.host_state_reload;
vmx->host_state.loaded = 0;
if (vmx->host_state.fs_reload_needed)
loadsegment(fs, vmx->host_state.fs_sel);
#ifdef CONFIG_X86_64
if (is_long_mode(&vmx->vcpu))
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
#endif
if (vmx->host_state.gs_ldt_reload_needed) {
kvm_load_ldt(vmx->host_state.ldt_sel);
#ifdef CONFIG_X86_64
load_gs_index(vmx->host_state.gs_sel);
wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
#else
loadsegment(gs, vmx->host_state.gs_sel);
#endif
}
if (vmx->host_state.fs_reload_needed)
loadsegment(fs, vmx->host_state.fs_sel);
reload_tss();
#ifdef CONFIG_X86_64
if (is_long_mode(&vmx->vcpu)) {
rdmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
}
wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_host_kernel_gs_base);
#endif
if (current_thread_info()->status & TS_USEDFPU)
clts();

View file

@ -75,6 +75,11 @@ DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
enum xen_domain_type xen_domain_type = XEN_NATIVE;
EXPORT_SYMBOL_GPL(xen_domain_type);
unsigned long *machine_to_phys_mapping = (void *)MACH2PHYS_VIRT_START;
EXPORT_SYMBOL(machine_to_phys_mapping);
unsigned int machine_to_phys_order;
EXPORT_SYMBOL(machine_to_phys_order);
struct start_info *xen_start_info;
EXPORT_SYMBOL_GPL(xen_start_info);
@ -1090,6 +1095,8 @@ static void __init xen_setup_stackprotector(void)
/* First C function to be called on Xen boot */
asmlinkage void __init xen_start_kernel(void)
{
struct physdev_set_iopl set_iopl;
int rc;
pgd_t *pgd;
if (!xen_start_info)
@ -1097,6 +1104,8 @@ asmlinkage void __init xen_start_kernel(void)
xen_domain_type = XEN_PV_DOMAIN;
xen_setup_machphys_mapping();
/* Install Xen paravirt ops */
pv_info = xen_info;
pv_init_ops = xen_init_ops;
@ -1202,10 +1211,18 @@ asmlinkage void __init xen_start_kernel(void)
#else
pv_info.kernel_rpl = 0;
#endif
/* set the limit of our address space */
xen_reserve_top();
/* We used to do this in xen_arch_setup, but that is too late on AMD,
* where early_cpu_init (run before ->arch_setup()) calls early_amd_init,
* which pokes the 0xcf8 port.
*/
set_iopl.iopl = 1;
rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
if (rc != 0)
xen_raw_printk("physdev_op failed %d\n", rc);
#ifdef CONFIG_X86_32
/* set up basic CPUID stuff */
cpu_detect(&new_cpu_data);

View file

@ -2034,6 +2034,20 @@ static __init void xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
set_page_prot(pmd, PAGE_KERNEL_RO);
}
void __init xen_setup_machphys_mapping(void)
{
struct xen_machphys_mapping mapping;
unsigned long machine_to_phys_nr_ents;
if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, &mapping) == 0) {
machine_to_phys_mapping = (unsigned long *)mapping.v_start;
machine_to_phys_nr_ents = mapping.max_mfn + 1;
} else {
machine_to_phys_nr_ents = MACH2PHYS_NR_ENTRIES;
}
machine_to_phys_order = fls(machine_to_phys_nr_ents - 1);
}
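A userspace sketch of the order computation at the end of
xen_setup_machphys_mapping(): fls(n - 1) yields the number of bits needed
to index n entries, which is what lets mfn_to_pfn() detect out-of-range
machine frames with a single shift, as in the hunk earlier:

#include <stdio.h>

/* fls() analog: 1-based index of the highest set bit, 0 for 0. */
static int fls_ul(unsigned long x)
{
	return x ? 8 * (int)sizeof(x) - __builtin_clzl(x) : 0;
}

int main(void)
{
	unsigned long nr_ents = 0x800000000UL;	/* example entry count */
	int order = fls_ul(nr_ents - 1);

	printf("order %d\n", order);			/* 35 */
	printf("mfn 0x%lx in range: %d\n", nr_ents - 1,
	       ((nr_ents - 1) >> order) == 0);		/* 1 */
	printf("mfn 0x%lx in range: %d\n", nr_ents,
	       (nr_ents >> order) == 0);		/* 0 */
	return 0;
}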
#ifdef CONFIG_X86_64
static void convert_pfn_mfn(void *v)
{
@ -2627,7 +2641,8 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
vma->vm_flags |= VM_IO | VM_RESERVED | VM_PFNMAP;
BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_RESERVED | VM_IO)) ==
(VM_PFNMAP | VM_RESERVED | VM_IO)));
rmd.mfn = mfn;
rmd.prot = prot;

View file

@ -248,8 +248,7 @@ char * __init xen_memory_setup(void)
else
extra_pages = 0;
if (!xen_initial_domain())
xen_add_extra_mem(extra_pages);
xen_add_extra_mem(extra_pages);
return "Xen";
}
@ -337,9 +336,6 @@ void __cpuinit xen_enable_syscall(void)
void __init xen_arch_setup(void)
{
struct physdev_set_iopl set_iopl;
int rc;
xen_panic_handler_init();
HYPERVISOR_vm_assist(VMASST_CMD_enable, VMASST_TYPE_4gb_segments);
@ -356,11 +352,6 @@ void __init xen_arch_setup(void)
xen_enable_sysenter();
xen_enable_syscall();
set_iopl.iopl = 1;
rc = HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
if (rc != 0)
printk(KERN_INFO "physdev_op failed %d\n", rc);
#ifdef CONFIG_ACPI
if (!(xen_start_info->flags & SIF_INITDOMAIN)) {
printk(KERN_INFO "ACPI in unprivileged domain disabled\n");

View file

@ -3166,8 +3166,8 @@ static inline int __ata_scsi_queuecmd(struct scsi_cmnd *scmd,
/**
* ata_scsi_queuecmd - Issue SCSI cdb to libata-managed device
* @shost: SCSI host of command to be sent
* @cmd: SCSI command to be sent
* @done: Completion function, called when command is complete
*
* In some cases, this function translates SCSI commands into
* ATA taskfiles, and queues the taskfiles to be sent to
@ -3177,42 +3177,39 @@ static inline int __ata_scsi_queuecmd(struct scsi_cmnd *scmd,
* ATA and ATAPI devices appearing as SCSI devices.
*
* LOCKING:
* Releases scsi-layer-held lock, and obtains host lock.
* ATA host lock
*
* RETURNS:
* Return value from __ata_scsi_queuecmd() if @cmd can be queued,
* 0 otherwise.
*/
static int ata_scsi_queuecmd_lck(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
int ata_scsi_queuecmd(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
{
struct ata_port *ap;
struct ata_device *dev;
struct scsi_device *scsidev = cmd->device;
struct Scsi_Host *shost = scsidev->host;
int rc = 0;
unsigned long irq_flags;
ap = ata_shost_to_port(shost);
spin_unlock(shost->host_lock);
spin_lock(ap->lock);
spin_lock_irqsave(ap->lock, irq_flags);
ata_scsi_dump_cdb(ap, cmd);
dev = ata_scsi_find_dev(ap, scsidev);
if (likely(dev))
rc = __ata_scsi_queuecmd(cmd, done, dev);
rc = __ata_scsi_queuecmd(cmd, cmd->scsi_done, dev);
else {
cmd->result = (DID_BAD_TARGET << 16);
done(cmd);
cmd->scsi_done(cmd);
}
spin_unlock(ap->lock);
spin_lock(shost->host_lock);
spin_unlock_irqrestore(ap->lock, irq_flags);
return rc;
}
DEF_SCSI_QCMD(ata_scsi_queuecmd)
/**
* ata_scsi_simulate - simulate SCSI command on ATA device
* @dev: the target device

View file

@ -538,7 +538,7 @@ static int vt8251_prepare_host(struct pci_dev *pdev, struct ata_host **r_host)
return 0;
}
static void svia_configure(struct pci_dev *pdev)
static void svia_configure(struct pci_dev *pdev, int board_id)
{
u8 tmp8;
@ -577,7 +577,7 @@ static void svia_configure(struct pci_dev *pdev)
}
/*
* vt6421 has problems talking to some drives. The following
* vt6420/1 has problems talking to some drives. The following
* is the fix from Joseph Chan <JosephChan@via.com.tw>.
*
* When host issues HOLD, device may send up to 20DW of data
@ -596,8 +596,9 @@ static void svia_configure(struct pci_dev *pdev)
*
* https://bugzilla.kernel.org/show_bug.cgi?id=15173
* http://article.gmane.org/gmane.linux.ide/46352
* http://thread.gmane.org/gmane.linux.kernel/1062139
*/
if (pdev->device == 0x3249) {
if (board_id == vt6420 || board_id == vt6421) {
pci_read_config_byte(pdev, 0x52, &tmp8);
tmp8 |= 1 << 2;
pci_write_config_byte(pdev, 0x52, tmp8);
@ -652,7 +653,7 @@ static int svia_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (rc)
return rc;
svia_configure(pdev);
svia_configure(pdev, board_id);
pci_set_master(pdev);
return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,

View file

@ -150,7 +150,8 @@ static const struct intel_device_info intel_ironlake_d_info = {
static const struct intel_device_info intel_ironlake_m_info = {
.gen = 5, .is_mobile = 1,
.need_gfx_hws = 1, .has_fbc = 1, .has_rc6 = 1, .has_hotplug = 1,
.need_gfx_hws = 1, .has_rc6 = 1, .has_hotplug = 1,
.has_fbc = 0, /* disabled due to buggy hardware */
.has_bsd_ring = 1,
};

View file

@ -1045,6 +1045,8 @@ void i915_gem_clflush_object(struct drm_gem_object *obj);
int i915_gem_object_set_domain(struct drm_gem_object *obj,
uint32_t read_domains,
uint32_t write_domain);
int i915_gem_object_flush_gpu(struct drm_i915_gem_object *obj,
bool interruptible);
int i915_gem_init_ringbuffer(struct drm_device *dev);
void i915_gem_cleanup_ringbuffer(struct drm_device *dev);
int i915_gem_do_init(struct drm_device *dev, unsigned long start,

View file

@ -547,6 +547,19 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
struct drm_i915_gem_object *obj_priv;
int ret = 0;
if (args->size == 0)
return 0;
if (!access_ok(VERIFY_WRITE,
(char __user *)(uintptr_t)args->data_ptr,
args->size))
return -EFAULT;
ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr,
args->size);
if (ret)
return -EFAULT;
ret = i915_mutex_lock_interruptible(dev);
if (ret)
return ret;
@ -564,23 +577,6 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
goto out;
}
if (args->size == 0)
goto out;
if (!access_ok(VERIFY_WRITE,
(char __user *)(uintptr_t)args->data_ptr,
args->size)) {
ret = -EFAULT;
goto out;
}
ret = fault_in_pages_writeable((char __user *)(uintptr_t)args->data_ptr,
args->size);
if (ret) {
ret = -EFAULT;
goto out;
}
ret = i915_gem_object_get_pages_or_evict(obj);
if (ret)
goto out;
@ -981,7 +977,20 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
struct drm_i915_gem_pwrite *args = data;
struct drm_gem_object *obj;
struct drm_i915_gem_object *obj_priv;
int ret = 0;
int ret;
if (args->size == 0)
return 0;
if (!access_ok(VERIFY_READ,
(char __user *)(uintptr_t)args->data_ptr,
args->size))
return -EFAULT;
ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr,
args->size);
if (ret)
return -EFAULT;
ret = i915_mutex_lock_interruptible(dev);
if (ret)
@ -994,30 +1003,12 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
}
obj_priv = to_intel_bo(obj);
/* Bounds check destination. */
if (args->offset > obj->size || args->size > obj->size - args->offset) {
ret = -EINVAL;
goto out;
}
if (args->size == 0)
goto out;
if (!access_ok(VERIFY_READ,
(char __user *)(uintptr_t)args->data_ptr,
args->size)) {
ret = -EFAULT;
goto out;
}
ret = fault_in_pages_readable((char __user *)(uintptr_t)args->data_ptr,
args->size);
if (ret) {
ret = -EFAULT;
goto out;
}
/* We can only do the GTT pwrite on untiled buffers, as otherwise
* it would end up going through the fenced access, and we'll get
* different detiling behavior between reading and writing.
@ -2907,6 +2898,20 @@ i915_gem_object_set_to_display_plane(struct drm_gem_object *obj,
return 0;
}
int
i915_gem_object_flush_gpu(struct drm_i915_gem_object *obj,
bool interruptible)
{
if (!obj->active)
return 0;
if (obj->base.write_domain & I915_GEM_GPU_DOMAINS)
i915_gem_flush_ring(obj->base.dev, NULL, obj->ring,
0, obj->base.write_domain);
return i915_gem_object_wait_rendering(&obj->base, interruptible);
}
/**
* Moves a single object to the CPU read, and possibly write domain.
*

View file

@ -34,6 +34,25 @@
#include "i915_drm.h"
#include "i915_drv.h"
/* Here's the desired hotplug mode */
#define ADPA_HOTPLUG_BITS (ADPA_CRT_HOTPLUG_PERIOD_128 | \
ADPA_CRT_HOTPLUG_WARMUP_10MS | \
ADPA_CRT_HOTPLUG_SAMPLE_4S | \
ADPA_CRT_HOTPLUG_VOLTAGE_50 | \
ADPA_CRT_HOTPLUG_VOLREF_325MV | \
ADPA_CRT_HOTPLUG_ENABLE)
struct intel_crt {
struct intel_encoder base;
bool force_hotplug_required;
};
static struct intel_crt *intel_attached_crt(struct drm_connector *connector)
{
return container_of(intel_attached_encoder(connector),
struct intel_crt, base);
}
static void intel_crt_dpms(struct drm_encoder *encoder, int mode)
{
struct drm_device *dev = encoder->dev;
@ -129,7 +148,7 @@ static void intel_crt_mode_set(struct drm_encoder *encoder,
dpll_md & ~DPLL_MD_UDI_MULTIPLIER_MASK);
}
adpa = 0;
adpa = ADPA_HOTPLUG_BITS;
if (adjusted_mode->flags & DRM_MODE_FLAG_PHSYNC)
adpa |= ADPA_HSYNC_ACTIVE_HIGH;
if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)
@ -157,53 +176,44 @@ static void intel_crt_mode_set(struct drm_encoder *encoder,
static bool intel_ironlake_crt_detect_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct intel_crt *crt = intel_attached_crt(connector);
struct drm_i915_private *dev_priv = dev->dev_private;
u32 adpa, temp;
u32 adpa;
bool ret;
bool turn_off_dac = false;
temp = adpa = I915_READ(PCH_ADPA);
/* The first time through, trigger an explicit detection cycle */
if (crt->force_hotplug_required) {
bool turn_off_dac = HAS_PCH_SPLIT(dev);
u32 save_adpa;
if (HAS_PCH_SPLIT(dev))
turn_off_dac = true;
crt->force_hotplug_required = 0;
adpa &= ~ADPA_CRT_HOTPLUG_MASK;
if (turn_off_dac)
adpa &= ~ADPA_DAC_ENABLE;
save_adpa = adpa = I915_READ(PCH_ADPA);
DRM_DEBUG_KMS("trigger hotplug detect cycle: adpa=0x%x\n", adpa);
/* disable HPD first */
I915_WRITE(PCH_ADPA, adpa);
(void)I915_READ(PCH_ADPA);
adpa |= ADPA_CRT_HOTPLUG_FORCE_TRIGGER;
if (turn_off_dac)
adpa &= ~ADPA_DAC_ENABLE;
adpa |= (ADPA_CRT_HOTPLUG_PERIOD_128 |
ADPA_CRT_HOTPLUG_WARMUP_10MS |
ADPA_CRT_HOTPLUG_SAMPLE_4S |
ADPA_CRT_HOTPLUG_VOLTAGE_50 | /* default */
ADPA_CRT_HOTPLUG_VOLREF_325MV |
ADPA_CRT_HOTPLUG_ENABLE |
ADPA_CRT_HOTPLUG_FORCE_TRIGGER);
I915_WRITE(PCH_ADPA, adpa);
DRM_DEBUG_KMS("pch crt adpa 0x%x", adpa);
I915_WRITE(PCH_ADPA, adpa);
if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0,
1000))
DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
if (wait_for((I915_READ(PCH_ADPA) & ADPA_CRT_HOTPLUG_FORCE_TRIGGER) == 0,
1000))
DRM_DEBUG_KMS("timed out waiting for FORCE_TRIGGER");
if (turn_off_dac) {
/* Make sure hotplug is enabled */
I915_WRITE(PCH_ADPA, temp | ADPA_CRT_HOTPLUG_ENABLE);
(void)I915_READ(PCH_ADPA);
if (turn_off_dac) {
I915_WRITE(PCH_ADPA, save_adpa);
POSTING_READ(PCH_ADPA);
}
}
/* Check the status to see if both blue and green are on now */
adpa = I915_READ(PCH_ADPA);
adpa &= ADPA_CRT_HOTPLUG_MONITOR_MASK;
if ((adpa == ADPA_CRT_HOTPLUG_MONITOR_COLOR) ||
(adpa == ADPA_CRT_HOTPLUG_MONITOR_MONO))
if ((adpa & ADPA_CRT_HOTPLUG_MONITOR_MASK) != 0)
ret = true;
else
ret = false;
DRM_DEBUG_KMS("ironlake hotplug adpa=0x%x, result %d\n", adpa, ret);
return ret;
}
@ -277,13 +287,12 @@ static bool intel_crt_ddc_probe(struct drm_i915_private *dev_priv, int ddc_bus)
return i2c_transfer(&dev_priv->gmbus[ddc_bus].adapter, msgs, 1) == 1;
}
static bool intel_crt_detect_ddc(struct drm_encoder *encoder)
static bool intel_crt_detect_ddc(struct intel_crt *crt)
{
struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
struct drm_i915_private *dev_priv = encoder->dev->dev_private;
struct drm_i915_private *dev_priv = crt->base.base.dev->dev_private;
/* CRT should always be at 0, but check anyway */
if (intel_encoder->type != INTEL_OUTPUT_ANALOG)
if (crt->base.type != INTEL_OUTPUT_ANALOG)
return false;
if (intel_crt_ddc_probe(dev_priv, dev_priv->crt_ddc_pin)) {
@ -291,7 +300,7 @@ static bool intel_crt_detect_ddc(struct drm_encoder *encoder)
return true;
}
if (intel_ddc_probe(intel_encoder, dev_priv->crt_ddc_pin)) {
if (intel_ddc_probe(&crt->base, dev_priv->crt_ddc_pin)) {
DRM_DEBUG_KMS("CRT detected via DDC:0x50 [EDID]\n");
return true;
}
@ -300,9 +309,9 @@ static bool intel_crt_detect_ddc(struct drm_encoder *encoder)
}
static enum drm_connector_status
intel_crt_load_detect(struct drm_crtc *crtc, struct intel_encoder *intel_encoder)
intel_crt_load_detect(struct drm_crtc *crtc, struct intel_crt *crt)
{
struct drm_encoder *encoder = &intel_encoder->base;
struct drm_encoder *encoder = &crt->base.base;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
@ -434,7 +443,7 @@ static enum drm_connector_status
intel_crt_detect(struct drm_connector *connector, bool force)
{
struct drm_device *dev = connector->dev;
struct intel_encoder *encoder = intel_attached_encoder(connector);
struct intel_crt *crt = intel_attached_crt(connector);
struct drm_crtc *crtc;
int dpms_mode;
enum drm_connector_status status;
@ -443,28 +452,31 @@ intel_crt_detect(struct drm_connector *connector, bool force)
if (intel_crt_detect_hotplug(connector)) {
DRM_DEBUG_KMS("CRT detected via hotplug\n");
return connector_status_connected;
} else
} else {
DRM_DEBUG_KMS("CRT not detected via hotplug\n");
return connector_status_disconnected;
}
}
if (intel_crt_detect_ddc(&encoder->base))
if (intel_crt_detect_ddc(crt))
return connector_status_connected;
if (!force)
return connector->status;
/* for pre-945g platforms use load detect */
if (encoder->base.crtc && encoder->base.crtc->enabled) {
status = intel_crt_load_detect(encoder->base.crtc, encoder);
crtc = crt->base.base.crtc;
if (crtc && crtc->enabled) {
status = intel_crt_load_detect(crtc, crt);
} else {
crtc = intel_get_load_detect_pipe(encoder, connector,
crtc = intel_get_load_detect_pipe(&crt->base, connector,
NULL, &dpms_mode);
if (crtc) {
if (intel_crt_detect_ddc(&encoder->base))
if (intel_crt_detect_ddc(crt))
status = connector_status_connected;
else
status = intel_crt_load_detect(crtc, encoder);
intel_release_load_detect_pipe(encoder,
status = intel_crt_load_detect(crtc, crt);
intel_release_load_detect_pipe(&crt->base,
connector, dpms_mode);
} else
status = connector_status_unknown;
@ -536,17 +548,17 @@ static const struct drm_encoder_funcs intel_crt_enc_funcs = {
void intel_crt_init(struct drm_device *dev)
{
struct drm_connector *connector;
struct intel_encoder *intel_encoder;
struct intel_crt *crt;
struct intel_connector *intel_connector;
struct drm_i915_private *dev_priv = dev->dev_private;
intel_encoder = kzalloc(sizeof(struct intel_encoder), GFP_KERNEL);
if (!intel_encoder)
crt = kzalloc(sizeof(struct intel_crt), GFP_KERNEL);
if (!crt)
return;
intel_connector = kzalloc(sizeof(struct intel_connector), GFP_KERNEL);
if (!intel_connector) {
kfree(intel_encoder);
kfree(crt);
return;
}
@ -554,20 +566,20 @@ void intel_crt_init(struct drm_device *dev)
drm_connector_init(dev, &intel_connector->base,
&intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
drm_encoder_init(dev, &intel_encoder->base, &intel_crt_enc_funcs,
drm_encoder_init(dev, &crt->base.base, &intel_crt_enc_funcs,
DRM_MODE_ENCODER_DAC);
intel_connector_attach_encoder(intel_connector, intel_encoder);
intel_connector_attach_encoder(intel_connector, &crt->base);
intel_encoder->type = INTEL_OUTPUT_ANALOG;
intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
(1 << INTEL_ANALOG_CLONE_BIT) |
(1 << INTEL_SDVO_LVDS_CLONE_BIT);
intel_encoder->crtc_mask = (1 << 0) | (1 << 1);
crt->base.type = INTEL_OUTPUT_ANALOG;
crt->base.clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT |
1 << INTEL_ANALOG_CLONE_BIT |
1 << INTEL_SDVO_LVDS_CLONE_BIT);
crt->base.crtc_mask = (1 << 0) | (1 << 1);
connector->interlace_allowed = 1;
connector->doublescan_allowed = 0;
drm_encoder_helper_add(&intel_encoder->base, &intel_crt_helper_funcs);
drm_encoder_helper_add(&crt->base.base, &intel_crt_helper_funcs);
drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);
drm_sysfs_connector_add(connector);
@ -577,5 +589,22 @@ void intel_crt_init(struct drm_device *dev)
else
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
/*
* Configure the automatic hotplug detection stuff
*/
crt->force_hotplug_required = 0;
if (HAS_PCH_SPLIT(dev)) {
u32 adpa;
adpa = I915_READ(PCH_ADPA);
adpa &= ~ADPA_CRT_HOTPLUG_MASK;
adpa |= ADPA_HOTPLUG_BITS;
I915_WRITE(PCH_ADPA, adpa);
POSTING_READ(PCH_ADPA);
DRM_DEBUG_KMS("pch crt adpa set to 0x%x\n", adpa);
crt->force_hotplug_required = 1;
}
dev_priv->hotplug_supported_mask |= CRT_HOTPLUG_INT_STATUS;
}

View file

@ -1611,6 +1611,18 @@ intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
wait_event(dev_priv->pending_flip_queue,
atomic_read(&obj_priv->pending_flip) == 0);
/* Big Hammer, we also need to ensure that any pending
* MI_WAIT_FOR_EVENT inside a user batch buffer on the
* current scanout is retired before unpinning the old
* framebuffer.
*/
ret = i915_gem_object_flush_gpu(obj_priv, false);
if (ret) {
i915_gem_object_unpin(to_intel_framebuffer(crtc->fb)->obj);
mutex_unlock(&dev->struct_mutex);
return ret;
}
}
ret = intel_pipe_set_base_atomic(crtc, crtc->fb, x, y,

View file

@ -160,7 +160,7 @@ intel_gpio_create(struct drm_i915_private *dev_priv, u32 pin)
};
struct intel_gpio *gpio;
if (pin < 1 || pin > 7)
if (pin >= ARRAY_SIZE(map_pin_to_reg) || !map_pin_to_reg[pin])
return NULL;
gpio = kzalloc(sizeof(struct intel_gpio), GFP_KERNEL);
@ -172,7 +172,8 @@ intel_gpio_create(struct drm_i915_private *dev_priv, u32 pin)
gpio->reg += PCH_GPIOA - GPIOA;
gpio->dev_priv = dev_priv;
snprintf(gpio->adapter.name, I2C_NAME_SIZE, "GPIO%c", "?BACDEF?"[pin]);
snprintf(gpio->adapter.name, sizeof(gpio->adapter.name),
"i915 GPIO%c", "?BACDE?F"[pin]);
gpio->adapter.owner = THIS_MODULE;
gpio->adapter.algo_data = &gpio->algo;
gpio->adapter.dev.parent = &dev_priv->dev->pdev->dev;
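A sketch of the lookup-table validation the first hunk adopts (the
register values below are placeholders; only the holes at the unmapped
indices matter): bound the index by the table size and treat zero entries
as unmapped, instead of hard-coding the range 1..7:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const unsigned int map_pin_to_reg[] = {
	0, 0x5014, 0x5010, 0x5018, 0x501c, 0x5020, 0, 0x5024,
};

static int pin_valid(unsigned int pin)
{
	return pin < ARRAY_SIZE(map_pin_to_reg) && map_pin_to_reg[pin];
}

int main(void)
{
	for (unsigned int pin = 0; pin < 10; pin++)
		printf("pin %u: %s\n", pin, pin_valid(pin) ? "ok" : "invalid");
	return 0;
}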
@ -349,7 +350,7 @@ int intel_setup_gmbus(struct drm_device *dev)
"panel",
"dpc",
"dpb",
"reserved"
"reserved",
"dpd",
};
struct drm_i915_private *dev_priv = dev->dev_private;
@ -366,8 +367,8 @@ int intel_setup_gmbus(struct drm_device *dev)
bus->adapter.owner = THIS_MODULE;
bus->adapter.class = I2C_CLASS_DDC;
snprintf(bus->adapter.name,
I2C_NAME_SIZE,
"gmbus %s",
sizeof(bus->adapter.name),
"i915 gmbus %s",
names[i]);
bus->adapter.dev.parent = &dev->pdev->dev;

View file

@ -31,6 +31,7 @@
*/
#include <linux/backlight.h>
#include <linux/acpi.h>
#include "drmP.h"
#include "nouveau_drv.h"
@ -136,6 +137,14 @@ int nouveau_backlight_init(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
#ifdef CONFIG_ACPI
if (acpi_video_backlight_support()) {
NV_INFO(dev, "ACPI backlight interface available, "
"not registering our own\n");
return 0;
}
#endif
switch (dev_priv->card_type) {
case NV_40:
return nouveau_nv40_backlight_init(dev);

View file

@ -6829,7 +6829,7 @@ nouveau_bios_posted(struct drm_device *dev)
struct drm_nouveau_private *dev_priv = dev->dev_private;
unsigned htotal;
if (dev_priv->chipset >= NV_50) {
if (dev_priv->card_type >= NV_50) {
if (NVReadVgaCrtc(dev, 0, 0x00) == 0 &&
NVReadVgaCrtc(dev, 0, 0x1a) == 0)
return false;

View file

@ -143,8 +143,10 @@ nouveau_bo_new(struct drm_device *dev, struct nouveau_channel *chan,
nvbo->no_vm = no_vm;
nvbo->tile_mode = tile_mode;
nvbo->tile_flags = tile_flags;
nvbo->bo.bdev = &dev_priv->ttm.bdev;
nouveau_bo_fixup_align(dev, tile_mode, tile_flags, &align, &size);
nouveau_bo_fixup_align(dev, tile_mode, nouveau_bo_tile_layout(nvbo),
&align, &size);
align >>= PAGE_SHIFT;
nouveau_bo_placement_set(nvbo, flags, 0);
@ -176,6 +178,31 @@ set_placement_list(uint32_t *pl, unsigned *n, uint32_t type, uint32_t flags)
pl[(*n)++] = TTM_PL_FLAG_SYSTEM | flags;
}
static void
set_placement_range(struct nouveau_bo *nvbo, uint32_t type)
{
struct drm_nouveau_private *dev_priv = nouveau_bdev(nvbo->bo.bdev);
if (dev_priv->card_type == NV_10 &&
nvbo->tile_mode && (type & TTM_PL_FLAG_VRAM)) {
/*
* Make sure that the color and depth buffers are handled
* by independent memory controller units. Up to a 9x
* speed up when alpha-blending and depth-test are enabled
* at the same time.
*/
int vram_pages = dev_priv->vram_size >> PAGE_SHIFT;
if (nvbo->tile_flags & NOUVEAU_GEM_TILE_ZETA) {
nvbo->placement.fpfn = vram_pages / 2;
nvbo->placement.lpfn = ~0;
} else {
nvbo->placement.fpfn = 0;
nvbo->placement.lpfn = vram_pages / 2;
}
}
}
void
nouveau_bo_placement_set(struct nouveau_bo *nvbo, uint32_t type, uint32_t busy)
{
@ -190,6 +217,8 @@ nouveau_bo_placement_set(struct nouveau_bo *nvbo, uint32_t type, uint32_t busy)
pl->busy_placement = nvbo->busy_placements;
set_placement_list(nvbo->busy_placements, &pl->num_busy_placement,
type | busy, flags);
set_placement_range(nvbo, type);
}
int
@ -525,7 +554,8 @@ nv50_bo_move_m2mf(struct nouveau_channel *chan, struct ttm_buffer_object *bo,
stride = 16 * 4;
height = amount / stride;
if (new_mem->mem_type == TTM_PL_VRAM && nvbo->tile_flags) {
if (new_mem->mem_type == TTM_PL_VRAM &&
nouveau_bo_tile_layout(nvbo)) {
ret = RING_SPACE(chan, 8);
if (ret)
return ret;
@ -546,7 +576,8 @@ nv50_bo_move_m2mf(struct nouveau_channel *chan, struct ttm_buffer_object *bo,
BEGIN_RING(chan, NvSubM2MF, 0x0200, 1);
OUT_RING (chan, 1);
}
if (old_mem->mem_type == TTM_PL_VRAM && nvbo->tile_flags) {
if (old_mem->mem_type == TTM_PL_VRAM &&
nouveau_bo_tile_layout(nvbo)) {
ret = RING_SPACE(chan, 8);
if (ret)
return ret;
@ -753,7 +784,8 @@ nouveau_bo_vm_bind(struct ttm_buffer_object *bo, struct ttm_mem_reg *new_mem,
if (dev_priv->card_type == NV_50) {
ret = nv50_mem_vm_bind_linear(dev,
offset + dev_priv->vm_vram_base,
new_mem->size, nvbo->tile_flags,
new_mem->size,
nouveau_bo_tile_layout(nvbo),
offset);
if (ret)
return ret;
@ -894,7 +926,8 @@ nouveau_ttm_fault_reserve_notify(struct ttm_buffer_object *bo)
* nothing to do here.
*/
if (bo->mem.mem_type != TTM_PL_VRAM) {
if (dev_priv->card_type < NV_50 || !nvbo->tile_flags)
if (dev_priv->card_type < NV_50 ||
!nouveau_bo_tile_layout(nvbo))
return 0;
}

View file

@ -281,7 +281,7 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
nv_encoder = find_encoder_by_type(connector, OUTPUT_ANALOG);
if (!nv_encoder && !nouveau_tv_disable)
nv_encoder = find_encoder_by_type(connector, OUTPUT_TV);
if (nv_encoder) {
if (nv_encoder && force) {
struct drm_encoder *encoder = to_drm_encoder(nv_encoder);
struct drm_encoder_helper_funcs *helper =
encoder->helper_private;
@ -641,11 +641,28 @@ nouveau_connector_get_modes(struct drm_connector *connector)
return ret;
}
static unsigned
get_tmds_link_bandwidth(struct drm_connector *connector)
{
struct nouveau_connector *nv_connector = nouveau_connector(connector);
struct drm_nouveau_private *dev_priv = connector->dev->dev_private;
struct dcb_entry *dcb = nv_connector->detected_encoder->dcb;
if (dcb->location != DCB_LOC_ON_CHIP ||
dev_priv->chipset >= 0x46)
return 165000;
else if (dev_priv->chipset >= 0x40)
return 155000;
else if (dev_priv->chipset >= 0x18)
return 135000;
else
return 112000;
}
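A condensed sketch of the resulting mode filter (simplified: the off-chip
encoder case, which always gets 165 MHz, is ignored here): pick the base
TMDS bandwidth from the chipset, then double it when dual-link is possible
and enabled:

#include <stdio.h>

static unsigned int tmds_max_khz(unsigned int chipset, int duallink)
{
	unsigned int khz;

	if (chipset >= 0x46)
		khz = 165000;
	else if (chipset >= 0x40)
		khz = 155000;
	else if (chipset >= 0x18)
		khz = 135000;
	else
		khz = 112000;

	return duallink ? khz * 2 : khz;
}

int main(void)
{
	printf("nv44, single link: %u kHz\n", tmds_max_khz(0x44, 0));
	printf("nv50, dual link:   %u kHz\n", tmds_max_khz(0x50, 1));
	return 0;
}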
static int
nouveau_connector_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
struct drm_nouveau_private *dev_priv = connector->dev->dev_private;
struct nouveau_connector *nv_connector = nouveau_connector(connector);
struct nouveau_encoder *nv_encoder = nv_connector->detected_encoder;
struct drm_encoder *encoder = to_drm_encoder(nv_encoder);
@ -663,11 +680,9 @@ nouveau_connector_mode_valid(struct drm_connector *connector,
max_clock = 400000;
break;
case OUTPUT_TMDS:
if ((dev_priv->card_type >= NV_50 && !nouveau_duallink) ||
!nv_encoder->dcb->duallink_possible)
max_clock = 165000;
else
max_clock = 330000;
max_clock = get_tmds_link_bandwidth(connector);
if (nouveau_duallink && nv_encoder->dcb->duallink_possible)
max_clock *= 2;
break;
case OUTPUT_ANALOG:
max_clock = nv_encoder->dcb->crtconf.maxfreq;
@ -709,44 +724,6 @@ nouveau_connector_best_encoder(struct drm_connector *connector)
return NULL;
}
void
nouveau_connector_set_polling(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct drm_crtc *crtc;
bool spare_crtc = false;
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
spare_crtc |= !crtc->enabled;
connector->polled = 0;
switch (connector->connector_type) {
case DRM_MODE_CONNECTOR_VGA:
case DRM_MODE_CONNECTOR_TV:
if (dev_priv->card_type >= NV_50 ||
(nv_gf4_disp_arch(dev) && spare_crtc))
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
break;
case DRM_MODE_CONNECTOR_DVII:
case DRM_MODE_CONNECTOR_DVID:
case DRM_MODE_CONNECTOR_HDMIA:
case DRM_MODE_CONNECTOR_DisplayPort:
case DRM_MODE_CONNECTOR_eDP:
if (dev_priv->card_type >= NV_50)
connector->polled = DRM_CONNECTOR_POLL_HPD;
else if (connector->connector_type == DRM_MODE_CONNECTOR_DVID ||
spare_crtc)
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
break;
default:
break;
}
}
static const struct drm_connector_helper_funcs
nouveau_connector_helper_funcs = {
.get_modes = nouveau_connector_get_modes,
@ -872,6 +849,7 @@ nouveau_connector_create(struct drm_device *dev, int index)
dev->mode_config.scaling_mode_property,
nv_connector->scaling_mode);
}
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
/* fall-through */
case DCB_CONNECTOR_TV_0:
case DCB_CONNECTOR_TV_1:
@ -888,11 +866,16 @@ nouveau_connector_create(struct drm_device *dev, int index)
dev->mode_config.dithering_mode_property,
nv_connector->use_dithering ?
DRM_MODE_DITHERING_ON : DRM_MODE_DITHERING_OFF);
if (dcb->type != DCB_CONNECTOR_LVDS) {
if (dev_priv->card_type >= NV_50)
connector->polled = DRM_CONNECTOR_POLL_HPD;
else
connector->polled = DRM_CONNECTOR_POLL_CONNECT;
}
break;
}
nouveau_connector_set_polling(connector);
drm_sysfs_connector_add(connector);
dcb->drm = connector;
return dcb->drm;

View file

@ -52,9 +52,6 @@ static inline struct nouveau_connector *nouveau_connector(
struct drm_connector *
nouveau_connector_create(struct drm_device *, int index);
void
nouveau_connector_set_polling(struct drm_connector *);
int
nouveau_connector_bpp(struct drm_connector *);

View file

@ -100,6 +100,9 @@ struct nouveau_bo {
int pin_refcnt;
};
#define nouveau_bo_tile_layout(nvbo) \
((nvbo)->tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK)
static inline struct nouveau_bo *
nouveau_bo(struct ttm_buffer_object *bo)
{
@ -304,6 +307,7 @@ struct nouveau_fifo_engine {
void (*destroy_context)(struct nouveau_channel *);
int (*load_context)(struct nouveau_channel *);
int (*unload_context)(struct drm_device *);
void (*tlb_flush)(struct drm_device *dev);
};
struct nouveau_pgraph_object_method {
@ -336,6 +340,7 @@ struct nouveau_pgraph_engine {
void (*destroy_context)(struct nouveau_channel *);
int (*load_context)(struct nouveau_channel *);
int (*unload_context)(struct drm_device *);
void (*tlb_flush)(struct drm_device *dev);
void (*set_region_tiling)(struct drm_device *dev, int i, uint32_t addr,
uint32_t size, uint32_t pitch);
@ -485,13 +490,13 @@ enum nv04_fp_display_regs {
};
struct nv04_crtc_reg {
unsigned char MiscOutReg; /* */
unsigned char MiscOutReg;
uint8_t CRTC[0xa0];
uint8_t CR58[0x10];
uint8_t Sequencer[5];
uint8_t Graphics[9];
uint8_t Attribute[21];
unsigned char DAC[768]; /* Internal Colorlookuptable */
unsigned char DAC[768];
/* PCRTC regs */
uint32_t fb_start;
@ -539,43 +544,9 @@ struct nv04_output_reg {
};
struct nv04_mode_state {
uint32_t bpp;
uint32_t width;
uint32_t height;
uint32_t interlace;
uint32_t repaint0;
uint32_t repaint1;
uint32_t screen;
uint32_t scale;
uint32_t dither;
uint32_t extra;
uint32_t fifo;
uint32_t pixel;
uint32_t horiz;
int arbitration0;
int arbitration1;
uint32_t pll;
uint32_t pllB;
uint32_t vpll;
uint32_t vpll2;
uint32_t vpllB;
uint32_t vpll2B;
struct nv04_crtc_reg crtc_reg[2];
uint32_t pllsel;
uint32_t sel_clk;
uint32_t general;
uint32_t crtcOwner;
uint32_t head;
uint32_t head2;
uint32_t cursorConfig;
uint32_t cursor0;
uint32_t cursor1;
uint32_t cursor2;
uint32_t timingH;
uint32_t timingV;
uint32_t displayV;
uint32_t crtcSync;
struct nv04_crtc_reg crtc_reg[2];
};
enum nouveau_card_type {
@ -613,6 +584,12 @@ struct drm_nouveau_private {
struct work_struct irq_work;
struct work_struct hpd_work;
struct {
spinlock_t lock;
uint32_t hpd0_bits;
uint32_t hpd1_bits;
} hpd_state;
struct list_head vbl_waiting;
struct {
@ -1045,6 +1022,7 @@ extern int nv50_fifo_create_context(struct nouveau_channel *);
extern void nv50_fifo_destroy_context(struct nouveau_channel *);
extern int nv50_fifo_load_context(struct nouveau_channel *);
extern int nv50_fifo_unload_context(struct drm_device *);
extern void nv50_fifo_tlb_flush(struct drm_device *dev);
/* nvc0_fifo.c */
extern int nvc0_fifo_init(struct drm_device *);
@ -1122,6 +1100,8 @@ extern int nv50_graph_load_context(struct nouveau_channel *);
extern int nv50_graph_unload_context(struct drm_device *);
extern void nv50_graph_context_switch(struct drm_device *);
extern int nv50_grctx_init(struct nouveau_grctx *);
extern void nv50_graph_tlb_flush(struct drm_device *dev);
extern void nv86_graph_tlb_flush(struct drm_device *dev);
/* nvc0_graph.c */
extern int nvc0_graph_init(struct drm_device *);
@ -1239,7 +1219,6 @@ extern u16 nouveau_bo_rd16(struct nouveau_bo *nvbo, unsigned index);
extern void nouveau_bo_wr16(struct nouveau_bo *nvbo, unsigned index, u16 val);
extern u32 nouveau_bo_rd32(struct nouveau_bo *nvbo, unsigned index);
extern void nouveau_bo_wr32(struct nouveau_bo *nvbo, unsigned index, u32 val);
extern int nouveau_bo_sync_gpu(struct nouveau_bo *, struct nouveau_channel *);
/* nouveau_fence.c */
struct nouveau_fence;

View file

@ -249,6 +249,7 @@ alloc_semaphore(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_semaphore *sema;
int ret;
if (!USE_SEMA(dev))
return NULL;
@ -257,10 +258,14 @@ alloc_semaphore(struct drm_device *dev)
if (!sema)
goto fail;
ret = drm_mm_pre_get(&dev_priv->fence.heap);
if (ret)
goto fail;
spin_lock(&dev_priv->fence.lock);
sema->mem = drm_mm_search_free(&dev_priv->fence.heap, 4, 0, 0);
if (sema->mem)
sema->mem = drm_mm_get_block(sema->mem, 4, 0);
sema->mem = drm_mm_get_block_atomic(sema->mem, 4, 0);
spin_unlock(&dev_priv->fence.lock);
if (!sema->mem)

View file

@ -107,23 +107,29 @@ nouveau_gem_info(struct drm_gem_object *gem, struct drm_nouveau_gem_info *rep)
}
static bool
nouveau_gem_tile_flags_valid(struct drm_device *dev, uint32_t tile_flags) {
switch (tile_flags) {
case 0x0000:
case 0x1800:
case 0x2800:
case 0x4800:
case 0x7000:
case 0x7400:
case 0x7a00:
case 0xe000:
break;
default:
NV_ERROR(dev, "bad page flags: 0x%08x\n", tile_flags);
return false;
nouveau_gem_tile_flags_valid(struct drm_device *dev, uint32_t tile_flags)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
if (dev_priv->card_type >= NV_50) {
switch (tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK) {
case 0x0000:
case 0x1800:
case 0x2800:
case 0x4800:
case 0x7000:
case 0x7400:
case 0x7a00:
case 0xe000:
return true;
}
} else {
if (!(tile_flags & NOUVEAU_GEM_TILE_LAYOUT_MASK))
return true;
}
return true;
NV_ERROR(dev, "bad page flags: 0x%08x\n", tile_flags);
return false;
}
int

View file

@ -519,11 +519,11 @@ nouveau_hw_fix_bad_vpll(struct drm_device *dev, int head)
struct pll_lims pll_lim;
struct nouveau_pll_vals pv;
uint32_t pllreg = head ? NV_RAMDAC_VPLL2 : NV_PRAMDAC_VPLL_COEFF;
enum pll_types pll = head ? PLL_VPLL1 : PLL_VPLL0;
if (get_pll_limits(dev, pllreg, &pll_lim))
if (get_pll_limits(dev, pll, &pll_lim))
return;
nouveau_hw_get_pllvals(dev, pllreg, &pv);
nouveau_hw_get_pllvals(dev, pll, &pv);
if (pv.M1 >= pll_lim.vco1.min_m && pv.M1 <= pll_lim.vco1.max_m &&
pv.N1 >= pll_lim.vco1.min_n && pv.N1 <= pll_lim.vco1.max_n &&
@ -536,7 +536,7 @@ nouveau_hw_fix_bad_vpll(struct drm_device *dev, int head)
pv.M1 = pll_lim.vco1.max_m;
pv.N1 = pll_lim.vco1.min_n;
pv.log2P = pll_lim.max_usable_log2p;
nouveau_hw_setpll(dev, pllreg, &pv);
nouveau_hw_setpll(dev, pll_lim.reg, &pv);
}
/*

View file

@ -415,6 +415,25 @@ nv_fix_nv40_hw_cursor(struct drm_device *dev, int head)
NVWriteRAMDAC(dev, head, NV_PRAMDAC_CU_START_POS, curpos);
}
static inline void
nv_set_crtc_base(struct drm_device *dev, int head, uint32_t offset)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
NVWriteCRTC(dev, head, NV_PCRTC_START, offset);
if (dev_priv->card_type == NV_04) {
/*
* Hilarious, the 24th bit doesn't want to stick to
* PCRTC_START...
*/
int cre_heb = NVReadVgaCrtc(dev, head, NV_CIO_CRE_HEB__INDEX);
NVWriteVgaCrtc(dev, head, NV_CIO_CRE_HEB__INDEX,
(cre_heb & ~0x40) | ((offset >> 18) & 0x40));
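/* (offset >> 18) & 0x40 moves bit 24 of the offset into bit 6 of CRE_HEB. */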
}
}
static inline void
nv_show_cursor(struct drm_device *dev, int head, bool show)
{

View file

@ -256,7 +256,7 @@ nouveau_i2c_find(struct drm_device *dev, int index)
if (index >= DCB_MAX_NUM_I2C_ENTRIES)
return NULL;
if (dev_priv->chipset >= NV_50 && (i2c->entry & 0x00000100)) {
if (dev_priv->card_type >= NV_50 && (i2c->entry & 0x00000100)) {
uint32_t reg = 0xe500, val;
if (i2c->port_type == 6) {

View file

@ -42,6 +42,13 @@
#include "nouveau_connector.h"
#include "nv50_display.h"
static DEFINE_RATELIMIT_STATE(nouveau_ratelimit_state, 3 * HZ, 20);
static int nouveau_ratelimit(void)
{
return __ratelimit(&nouveau_ratelimit_state);
}
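/* The state above permits bursts of at most 20 messages per 3 * HZ jiffies;
 * past that, __ratelimit() returns 0 and the noisy PFIFO prints below are
 * suppressed until the interval expires. */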
void
nouveau_irq_preinstall(struct drm_device *dev)
{
@ -53,6 +60,7 @@ nouveau_irq_preinstall(struct drm_device *dev)
if (dev_priv->card_type >= NV_50) {
INIT_WORK(&dev_priv->irq_work, nv50_display_irq_handler_bh);
INIT_WORK(&dev_priv->hpd_work, nv50_display_irq_hotplug_bh);
spin_lock_init(&dev_priv->hpd_state.lock);
INIT_LIST_HEAD(&dev_priv->vbl_waiting);
}
}
@ -202,8 +210,8 @@ nouveau_fifo_irq_handler(struct drm_device *dev)
}
if (status & NV_PFIFO_INTR_DMA_PUSHER) {
u32 get = nv_rd32(dev, 0x003244);
u32 put = nv_rd32(dev, 0x003240);
u32 dma_get = nv_rd32(dev, 0x003244);
u32 dma_put = nv_rd32(dev, 0x003240);
u32 push = nv_rd32(dev, 0x003220);
u32 state = nv_rd32(dev, 0x003228);
@ -213,16 +221,18 @@ nouveau_fifo_irq_handler(struct drm_device *dev)
u32 ib_get = nv_rd32(dev, 0x003334);
u32 ib_put = nv_rd32(dev, 0x003330);
NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%02x%08x "
if (nouveau_ratelimit())
NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%02x%08x "
"Put 0x%02x%08x IbGet 0x%08x IbPut 0x%08x "
"State 0x%08x Push 0x%08x\n",
chid, ho_get, get, ho_put, put, ib_get, ib_put,
state, push);
chid, ho_get, dma_get, ho_put,
dma_put, ib_get, ib_put, state,
push);
/* METHOD_COUNT, in DMA_STATE on earlier chipsets */
nv_wr32(dev, 0x003364, 0x00000000);
if (get != put || ho_get != ho_put) {
nv_wr32(dev, 0x003244, put);
if (dma_get != dma_put || ho_get != ho_put) {
nv_wr32(dev, 0x003244, dma_put);
nv_wr32(dev, 0x003328, ho_put);
} else
if (ib_get != ib_put) {
@ -231,10 +241,10 @@ nouveau_fifo_irq_handler(struct drm_device *dev)
} else {
NV_INFO(dev, "PFIFO_DMA_PUSHER - Ch %d Get 0x%08x "
"Put 0x%08x State 0x%08x Push 0x%08x\n",
chid, get, put, state, push);
chid, dma_get, dma_put, state, push);
if (get != put)
nv_wr32(dev, 0x003244, put);
if (dma_get != dma_put)
nv_wr32(dev, 0x003244, dma_put);
}
nv_wr32(dev, 0x003228, 0x00000000);
@ -266,8 +276,9 @@ nouveau_fifo_irq_handler(struct drm_device *dev)
}
if (status) {
NV_INFO(dev, "PFIFO_INTR 0x%08x - Ch %d\n",
status, chid);
if (nouveau_ratelimit())
NV_INFO(dev, "PFIFO_INTR 0x%08x - Ch %d\n",
status, chid);
nv_wr32(dev, NV03_PFIFO_INTR_0, status);
status = 0;
}
@ -544,13 +555,6 @@ nouveau_pgraph_intr_notify(struct drm_device *dev, uint32_t nsource)
nouveau_graph_dump_trap_info(dev, "PGRAPH_NOTIFY", &trap);
}
static DEFINE_RATELIMIT_STATE(nouveau_ratelimit_state, 3 * HZ, 20);
static int nouveau_ratelimit(void)
{
return __ratelimit(&nouveau_ratelimit_state);
}
static inline void
nouveau_pgraph_intr_error(struct drm_device *dev, uint32_t nsource)

View file

@ -33,9 +33,9 @@
#include "drmP.h"
#include "drm.h"
#include "drm_sarea.h"
#include "nouveau_drv.h"
#define MIN(a,b) a < b ? a : b
#include "nouveau_drv.h"
#include "nouveau_pm.h"
/*
* NV10-NV40 tiling helpers
@ -175,11 +175,10 @@ nv50_mem_vm_bind_linear(struct drm_device *dev, uint64_t virt, uint32_t size,
}
}
}
dev_priv->engine.instmem.flush(dev);
nv50_vm_flush(dev, 5);
nv50_vm_flush(dev, 0);
nv50_vm_flush(dev, 4);
dev_priv->engine.instmem.flush(dev);
dev_priv->engine.fifo.tlb_flush(dev);
dev_priv->engine.graph.tlb_flush(dev);
nv50_vm_flush(dev, 6);
return 0;
}
@ -209,11 +208,10 @@ nv50_mem_vm_unbind(struct drm_device *dev, uint64_t virt, uint32_t size)
pte++;
}
}
dev_priv->engine.instmem.flush(dev);
nv50_vm_flush(dev, 5);
nv50_vm_flush(dev, 0);
nv50_vm_flush(dev, 4);
dev_priv->engine.instmem.flush(dev);
dev_priv->engine.fifo.tlb_flush(dev);
dev_priv->engine.graph.tlb_flush(dev);
nv50_vm_flush(dev, 6);
}
@ -653,6 +651,7 @@ nouveau_mem_gart_init(struct drm_device *dev)
void
nouveau_mem_timing_init(struct drm_device *dev)
{
/* cards < NVC0 only */
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
struct nouveau_pm_memtimings *memtimings = &pm->memtimings;
@ -719,14 +718,14 @@ nouveau_mem_timing_init(struct drm_device *dev)
tUNK_19 = 1;
tUNK_20 = 0;
tUNK_21 = 0;
switch (MIN(recordlen,21)) {
case 21:
switch (min(recordlen, 22)) {
case 22:
tUNK_21 = entry[21];
case 20:
case 21:
tUNK_20 = entry[20];
case 19:
case 20:
tUNK_19 = entry[19];
case 18:
case 19:
tUNK_18 = entry[18];
default:
tUNK_0 = entry[0];
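/* Note: the case labels moved up by one because entry[21] is only valid
 * when recordlen >= 22; the fall-through then fills every field the
 * record actually contains. */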
@ -756,24 +755,30 @@ nouveau_mem_timing_init(struct drm_device *dev)
timing->reg_100228 = (tUNK_12 << 16 | tUNK_11 << 8 | tUNK_10);
if(recordlen > 19) {
timing->reg_100228 += (tUNK_19 - 1) << 24;
} else {
}/* I cannot back up this else-statement right now
else {
timing->reg_100228 += tUNK_12 << 24;
}
}*/
/* XXX: reg_10022c */
timing->reg_10022c = tUNK_2 - 1;
timing->reg_100230 = (tUNK_20 << 24 | tUNK_21 << 16 |
tUNK_13 << 8 | tUNK_13);
/* XXX: +6? */
timing->reg_100234 = (tRAS << 24 | (tUNK_19 + 6) << 8 | tRC);
if(tUNK_10 > tUNK_11) {
timing->reg_100234 += tUNK_10 << 16;
} else {
timing->reg_100234 += tUNK_11 << 16;
timing->reg_100234 += max(tUNK_10,tUNK_11) << 16;
/* XXX: reg_100238, reg_10023c
* reg: 0x00??????
* reg_10023c:
* 0 for pre-NV50 cards
* 0x????0202 for NV50+ cards (empirical evidence) */
if(dev_priv->card_type >= NV_50) {
timing->reg_10023c = 0x202;
}
/* XXX: reg_100238, reg_10023c */
NV_DEBUG(dev, "Entry %d: 220: %08x %08x %08x %08x\n", i,
timing->reg_100220, timing->reg_100224,
timing->reg_100228, timing->reg_10022c);

View file

@ -129,7 +129,7 @@ nouveau_gpuobj_new(struct drm_device *dev, struct nouveau_channel *chan,
if (ramin == NULL) {
spin_unlock(&dev_priv->ramin_lock);
nouveau_gpuobj_ref(NULL, &gpuobj);
return ret;
return -ENOMEM;
}
ramin = drm_mm_get_block_atomic(ramin, size, align);

View file

@ -284,6 +284,7 @@ nouveau_sysfs_fini(struct drm_device *dev)
}
}
#ifdef CONFIG_HWMON
static ssize_t
nouveau_hwmon_show_temp(struct device *d, struct device_attribute *a, char *buf)
{
@ -395,10 +396,12 @@ static struct attribute *hwmon_attributes[] = {
static const struct attribute_group hwmon_attrgroup = {
.attrs = hwmon_attributes,
};
#endif
static int
nouveau_hwmon_init(struct drm_device *dev)
{
#ifdef CONFIG_HWMON
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
struct device *hwmon_dev;
@ -425,13 +428,14 @@ nouveau_hwmon_init(struct drm_device *dev)
}
pm->hwmon = hwmon_dev;
#endif
return 0;
}
static void
nouveau_hwmon_fini(struct drm_device *dev)
{
#ifdef CONFIG_HWMON
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_pm_engine *pm = &dev_priv->engine.pm;
@ -439,6 +443,7 @@ nouveau_hwmon_fini(struct drm_device *dev)
sysfs_remove_group(&pm->hwmon->kobj, &hwmon_attrgroup);
hwmon_device_unregister(pm->hwmon);
}
#endif
}
int

View file

@ -153,26 +153,42 @@ nouveau_ramht_insert(struct nouveau_channel *chan, u32 handle,
return -ENOMEM;
}
static struct nouveau_ramht_entry *
nouveau_ramht_remove_entry(struct nouveau_channel *chan, u32 handle)
{
struct nouveau_ramht *ramht = chan ? chan->ramht : NULL;
struct nouveau_ramht_entry *entry;
unsigned long flags;
if (!ramht)
return NULL;
spin_lock_irqsave(&ramht->lock, flags);
list_for_each_entry(entry, &ramht->entries, head) {
if (entry->channel == chan &&
(!handle || entry->handle == handle)) {
list_del(&entry->head);
spin_unlock_irqrestore(&ramht->lock, flags);
return entry;
}
}
spin_unlock_irqrestore(&ramht->lock, flags);
return NULL;
}
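/* A zero handle above acts as a wildcard matching any entry owned by the
 * channel; nouveau_ramht_ref() relies on this to drain a channel's entries. */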
static void
nouveau_ramht_remove_locked(struct nouveau_channel *chan, u32 handle)
nouveau_ramht_remove_hash(struct nouveau_channel *chan, u32 handle)
{
struct drm_device *dev = chan->dev;
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_instmem_engine *instmem = &dev_priv->engine.instmem;
struct nouveau_gpuobj *ramht = chan->ramht->gpuobj;
struct nouveau_ramht_entry *entry, *tmp;
unsigned long flags;
u32 co, ho;
list_for_each_entry_safe(entry, tmp, &chan->ramht->entries, head) {
if (entry->channel != chan || entry->handle != handle)
continue;
nouveau_gpuobj_ref(NULL, &entry->gpuobj);
list_del(&entry->head);
kfree(entry);
break;
}
spin_lock_irqsave(&chan->ramht->lock, flags);
co = ho = nouveau_ramht_hash_handle(chan, handle);
do {
if (nouveau_ramht_entry_valid(dev, ramht, co) &&
@ -184,7 +200,7 @@ nouveau_ramht_remove_locked(struct nouveau_channel *chan, u32 handle)
nv_wo32(ramht, co + 0, 0x00000000);
nv_wo32(ramht, co + 4, 0x00000000);
instmem->flush(dev);
return;
goto out;
}
co += 8;
@ -194,17 +210,22 @@ nouveau_ramht_remove_locked(struct nouveau_channel *chan, u32 handle)
NV_ERROR(dev, "RAMHT entry not found. ch=%d, handle=0x%08x\n",
chan->id, handle);
out:
spin_unlock_irqrestore(&chan->ramht->lock, flags);
}
void
nouveau_ramht_remove(struct nouveau_channel *chan, u32 handle)
{
struct nouveau_ramht *ramht = chan->ramht;
unsigned long flags;
struct nouveau_ramht_entry *entry;
spin_lock_irqsave(&ramht->lock, flags);
nouveau_ramht_remove_locked(chan, handle);
spin_unlock_irqrestore(&ramht->lock, flags);
entry = nouveau_ramht_remove_entry(chan, handle);
if (!entry)
return;
nouveau_ramht_remove_hash(chan, entry->handle);
nouveau_gpuobj_ref(NULL, &entry->gpuobj);
kfree(entry);
}
struct nouveau_gpuobj *
@ -265,23 +286,19 @@ void
nouveau_ramht_ref(struct nouveau_ramht *ref, struct nouveau_ramht **ptr,
struct nouveau_channel *chan)
{
struct nouveau_ramht_entry *entry, *tmp;
struct nouveau_ramht_entry *entry;
struct nouveau_ramht *ramht;
unsigned long flags;
if (ref)
kref_get(&ref->refcount);
ramht = *ptr;
if (ramht) {
spin_lock_irqsave(&ramht->lock, flags);
list_for_each_entry_safe(entry, tmp, &ramht->entries, head) {
if (entry->channel != chan)
continue;
nouveau_ramht_remove_locked(chan, entry->handle);
while ((entry = nouveau_ramht_remove_entry(chan, 0))) {
nouveau_ramht_remove_hash(chan, entry->handle);
nouveau_gpuobj_ref(NULL, &entry->gpuobj);
kfree(entry);
}
spin_unlock_irqrestore(&ramht->lock, flags);
kref_put(&ramht->refcount, nouveau_ramht_del);
}

View file

@ -120,8 +120,8 @@ nouveau_sgdma_bind(struct ttm_backend *be, struct ttm_mem_reg *mem)
dev_priv->engine.instmem.flush(nvbe->dev);
if (dev_priv->card_type == NV_50) {
nv50_vm_flush(dev, 5); /* PGRAPH */
nv50_vm_flush(dev, 0); /* PFIFO */
dev_priv->engine.fifo.tlb_flush(dev);
dev_priv->engine.graph.tlb_flush(dev);
}
nvbe->bound = true;
@ -162,8 +162,8 @@ nouveau_sgdma_unbind(struct ttm_backend *be)
dev_priv->engine.instmem.flush(nvbe->dev);
if (dev_priv->card_type == NV_50) {
nv50_vm_flush(dev, 5);
nv50_vm_flush(dev, 0);
dev_priv->engine.fifo.tlb_flush(dev);
dev_priv->engine.graph.tlb_flush(dev);
}
nvbe->bound = false;
@ -224,7 +224,11 @@ nouveau_sgdma_init(struct drm_device *dev)
int i, ret;
if (dev_priv->card_type < NV_50) {
aper_size = (64 * 1024 * 1024);
if(dev_priv->ramin_rsvd_vram < 2 * 1024 * 1024)
aper_size = 64 * 1024 * 1024;
else
aper_size = 512 * 1024 * 1024;
obj_size = (aper_size >> NV_CTXDMA_PAGE_SHIFT) * 4;
obj_size += 8; /* ctxdma header */
} else {

View file

@ -354,6 +354,15 @@ static int nouveau_init_engine_ptrs(struct drm_device *dev)
engine->graph.destroy_context = nv50_graph_destroy_context;
engine->graph.load_context = nv50_graph_load_context;
engine->graph.unload_context = nv50_graph_unload_context;
if (dev_priv->chipset != 0x86)
engine->graph.tlb_flush = nv50_graph_tlb_flush;
else {
/* From what I can see, NVIDIA does this on every
* pre-NVA3 board except NVAC, but we've only
* ever seen problems on NV86.
*/
engine->graph.tlb_flush = nv86_graph_tlb_flush;
}
engine->fifo.channels = 128;
engine->fifo.init = nv50_fifo_init;
engine->fifo.takedown = nv50_fifo_takedown;
@ -365,6 +374,7 @@ static int nouveau_init_engine_ptrs(struct drm_device *dev)
engine->fifo.destroy_context = nv50_fifo_destroy_context;
engine->fifo.load_context = nv50_fifo_load_context;
engine->fifo.unload_context = nv50_fifo_unload_context;
engine->fifo.tlb_flush = nv50_fifo_tlb_flush;
engine->display.early_init = nv50_display_early_init;
engine->display.late_takedown = nv50_display_late_takedown;
engine->display.create = nv50_display_create;
@ -1041,6 +1051,9 @@ int nouveau_ioctl_getparam(struct drm_device *dev, void *data,
case NOUVEAU_GETPARAM_PTIMER_TIME:
getparam->value = dev_priv->engine.timer.read(dev);
break;
case NOUVEAU_GETPARAM_HAS_BO_USAGE:
getparam->value = 1;
break;
case NOUVEAU_GETPARAM_GRAPH_UNITS:
/* NV40 and NV50 versions are quite different, but register
* address is the same. User is supposed to know the card
@ -1051,7 +1064,7 @@ int nouveau_ioctl_getparam(struct drm_device *dev, void *data,
}
/* FALLTHRU */
default:
NV_ERROR(dev, "unknown parameter %lld\n", getparam->param);
NV_DEBUG(dev, "unknown parameter %lld\n", getparam->param);
return -EINVAL;
}
@ -1066,7 +1079,7 @@ nouveau_ioctl_setparam(struct drm_device *dev, void *data,
switch (setparam->param) {
default:
NV_ERROR(dev, "unknown parameter %lld\n", setparam->param);
NV_DEBUG(dev, "unknown parameter %lld\n", setparam->param);
return -EINVAL;
}

View file

@ -191,7 +191,7 @@ nv40_temp_get(struct drm_device *dev)
int offset = sensor->offset_mult / sensor->offset_div;
int core_temp;
if (dev_priv->chipset >= 0x50) {
if (dev_priv->card_type >= NV_50) {
core_temp = nv_rd32(dev, 0x20008);
} else {
core_temp = nv_rd32(dev, 0x0015b4) & 0x1fff;

View file

@ -158,7 +158,6 @@ nv_crtc_dpms(struct drm_crtc *crtc, int mode)
{
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct drm_connector *connector;
unsigned char seq1 = 0, crtc17 = 0;
unsigned char crtc1A;
@ -213,10 +212,6 @@ nv_crtc_dpms(struct drm_crtc *crtc, int mode)
NVVgaSeqReset(dev, nv_crtc->index, false);
NVWriteVgaCrtc(dev, nv_crtc->index, NV_CIO_CRE_RPC1_INDEX, crtc1A);
/* Update connector polling modes */
list_for_each_entry(connector, &dev->mode_config.connector_list, head)
nouveau_connector_set_polling(connector);
}
static bool
@ -831,7 +826,7 @@ nv04_crtc_do_mode_set_base(struct drm_crtc *crtc,
/* Update the framebuffer location. */
regp->fb_start = nv_crtc->fb.offset & ~3;
regp->fb_start += (y * drm_fb->pitch) + (x * drm_fb->bits_per_pixel / 8);
NVWriteCRTC(dev, nv_crtc->index, NV_PCRTC_START, regp->fb_start);
nv_set_crtc_base(dev, nv_crtc->index, regp->fb_start);
/* Update the arbitration parameters. */
nouveau_calc_arb(dev, crtc->mode.clock, drm_fb->bits_per_pixel,

View file

@ -185,14 +185,15 @@ static bool nv04_dfp_mode_fixup(struct drm_encoder *encoder,
struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
struct nouveau_connector *nv_connector = nouveau_encoder_connector_get(nv_encoder);
/* For internal panels and gpu scaling on DVI we need the native mode */
if (nv_connector->scaling_mode != DRM_MODE_SCALE_NONE) {
if (!nv_connector->native_mode)
return false;
if (!nv_connector->native_mode ||
nv_connector->scaling_mode == DRM_MODE_SCALE_NONE ||
mode->hdisplay > nv_connector->native_mode->hdisplay ||
mode->vdisplay > nv_connector->native_mode->vdisplay) {
nv_encoder->mode = *adjusted_mode;
} else {
nv_encoder->mode = *nv_connector->native_mode;
adjusted_mode->clock = nv_connector->native_mode->clock;
} else {
nv_encoder->mode = *adjusted_mode;
}
return true;

View file

@ -76,6 +76,15 @@ nv04_pm_clock_set(struct drm_device *dev, void *pre_state)
reg += 4;
nouveau_hw_setpll(dev, reg, &state->calc);
if (dev_priv->card_type < NV_30 && reg == NV_PRAMDAC_MPLL_COEFF) {
if (dev_priv->card_type == NV_20)
nv_mask(dev, 0x1002c4, 0, 1 << 20);
/* Reset the DLLs */
nv_mask(dev, 0x1002c0, 0, 1 << 8);
}
kfree(state);
}

View file

@ -51,24 +51,28 @@ nv50_calc_pll2(struct drm_device *dev, struct pll_lims *pll, int clk,
int *N, int *fN, int *M, int *P)
{
fixed20_12 fb_div, a, b;
u32 refclk = pll->refclk / 10;
u32 max_vco_freq = pll->vco1.maxfreq / 10;
u32 max_vco_inputfreq = pll->vco1.max_inputfreq / 10;
clk /= 10;
*P = pll->vco1.maxfreq / clk;
*P = max_vco_freq / clk;
if (*P > pll->max_p)
*P = pll->max_p;
if (*P < pll->min_p)
*P = pll->min_p;
/* *M = ceil(refclk / pll->vco.max_inputfreq); */
a.full = dfixed_const(pll->refclk);
b.full = dfixed_const(pll->vco1.max_inputfreq);
/* *M = floor((refclk + max_vco_inputfreq) / max_vco_inputfreq); */
a.full = dfixed_const(refclk + max_vco_inputfreq);
b.full = dfixed_const(max_vco_inputfreq);
a.full = dfixed_div(a, b);
a.full = dfixed_ceil(a);
a.full = dfixed_floor(a);
*M = dfixed_trunc(a);
/* fb_div = (vco * *M) / refclk; */
fb_div.full = dfixed_const(clk * *P);
fb_div.full = dfixed_mul(fb_div, a);
a.full = dfixed_const(pll->refclk);
a.full = dfixed_const(refclk);
fb_div.full = dfixed_div(fb_div, a);
/* *N = floor(fb_div); */
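/* Worked example with hypothetical limits: for refclk = 2700 and
 * max_vco_inputfreq = 400 (both already divided by 10),
 * *M = floor((2700 + 400) / 400) = floor(7.75) = 7, matching the old
 * ceil(2700 / 400) = 7; the two only differ on exact multiples, e.g.
 * refclk = 2800 yields floor(8.0) = 8 where ceil() gave 7. */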

View file

@ -546,7 +546,7 @@ nv50_crtc_do_mode_set_base(struct drm_crtc *crtc,
}
nv_crtc->fb.offset = fb->nvbo->bo.offset - dev_priv->vm_vram_base;
nv_crtc->fb.tile_flags = fb->nvbo->tile_flags;
nv_crtc->fb.tile_flags = nouveau_bo_tile_layout(fb->nvbo);
nv_crtc->fb.cpp = drm_fb->bits_per_pixel / 8;
if (!nv_crtc->fb.blanked && dev_priv->chipset != 0x50) {
ret = RING_SPACE(evo, 2);
@ -578,7 +578,7 @@ nv50_crtc_do_mode_set_base(struct drm_crtc *crtc,
fb->nvbo->tile_mode);
}
if (dev_priv->chipset == 0x50)
OUT_RING(evo, (fb->nvbo->tile_flags << 8) | format);
OUT_RING(evo, (nv_crtc->fb.tile_flags << 8) | format);
else
OUT_RING(evo, format);

View file

@ -1032,11 +1032,18 @@ nv50_display_irq_hotplug_bh(struct work_struct *work)
struct drm_connector *connector;
const uint32_t gpio_reg[4] = { 0xe104, 0xe108, 0xe280, 0xe284 };
uint32_t unplug_mask, plug_mask, change_mask;
uint32_t hpd0, hpd1 = 0;
uint32_t hpd0, hpd1;
hpd0 = nv_rd32(dev, 0xe054) & nv_rd32(dev, 0xe050);
spin_lock_irq(&dev_priv->hpd_state.lock);
hpd0 = dev_priv->hpd_state.hpd0_bits;
dev_priv->hpd_state.hpd0_bits = 0;
hpd1 = dev_priv->hpd_state.hpd1_bits;
dev_priv->hpd_state.hpd1_bits = 0;
spin_unlock_irq(&dev_priv->hpd_state.lock);
hpd0 &= nv_rd32(dev, 0xe050);
if (dev_priv->chipset >= 0x90)
hpd1 = nv_rd32(dev, 0xe074) & nv_rd32(dev, 0xe070);
hpd1 &= nv_rd32(dev, 0xe070);
plug_mask = (hpd0 & 0x0000ffff) | (hpd1 << 16);
unplug_mask = (hpd0 >> 16) | (hpd1 & 0xffff0000);
@ -1078,10 +1085,6 @@ nv50_display_irq_hotplug_bh(struct work_struct *work)
helper->dpms(connector->encoder, DRM_MODE_DPMS_OFF);
}
nv_wr32(dev, 0xe054, nv_rd32(dev, 0xe054));
if (dev_priv->chipset >= 0x90)
nv_wr32(dev, 0xe074, nv_rd32(dev, 0xe074));
drm_helper_hpd_irq_event(dev);
}
@ -1092,8 +1095,22 @@ nv50_display_irq_handler(struct drm_device *dev)
uint32_t delayed = 0;
if (nv_rd32(dev, NV50_PMC_INTR_0) & NV50_PMC_INTR_0_HOTPLUG) {
if (!work_pending(&dev_priv->hpd_work))
queue_work(dev_priv->wq, &dev_priv->hpd_work);
uint32_t hpd0_bits, hpd1_bits = 0;
hpd0_bits = nv_rd32(dev, 0xe054);
nv_wr32(dev, 0xe054, hpd0_bits);
if (dev_priv->chipset >= 0x90) {
hpd1_bits = nv_rd32(dev, 0xe074);
nv_wr32(dev, 0xe074, hpd1_bits);
}
spin_lock(&dev_priv->hpd_state.lock);
dev_priv->hpd_state.hpd0_bits |= hpd0_bits;
dev_priv->hpd_state.hpd1_bits |= hpd1_bits;
spin_unlock(&dev_priv->hpd_state.lock);
queue_work(dev_priv->wq, &dev_priv->hpd_work);
}
while (nv_rd32(dev, NV50_PMC_INTR_0) & NV50_PMC_INTR_0_DISPLAY) {

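/* A minimal sketch (hypothetical names) of the latch-and-defer shape the
 * hunks above adopt: the hard IRQ acks the hardware and accumulates the
 * pending bits under a spinlock, and the work item takes and clears them
 * atomically before acting on them. */

struct hpd_latch {
	spinlock_t lock;
	u32 bits;
};

static void hpd_irq_latch(struct hpd_latch *l, u32 pending,
			  struct workqueue_struct *wq, struct work_struct *work)
{
	spin_lock(&l->lock);
	l->bits |= pending;		/* accumulate across interrupts */
	spin_unlock(&l->lock);
	queue_work(wq, work);
}

static u32 hpd_bh_take(struct hpd_latch *l)
{
	u32 bits;

	spin_lock_irq(&l->lock);
	bits = l->bits;			/* consume-and-clear */
	l->bits = 0;
	spin_unlock_irq(&l->lock);
	return bits;
}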
View file

@ -464,3 +464,8 @@ nv50_fifo_unload_context(struct drm_device *dev)
return 0;
}
void
nv50_fifo_tlb_flush(struct drm_device *dev)
{
nv50_vm_flush(dev, 5);
}

View file

@ -402,3 +402,55 @@ struct nouveau_pgraph_object_class nv50_graph_grclass[] = {
{ 0x8597, false, NULL }, /* tesla (nva3, nva5, nva8) */
{}
};
void
nv50_graph_tlb_flush(struct drm_device *dev)
{
nv50_vm_flush(dev, 0);
}
void
nv86_graph_tlb_flush(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
struct nouveau_timer_engine *ptimer = &dev_priv->engine.timer;
bool idle, timeout = false;
unsigned long flags;
u64 start;
u32 tmp;
spin_lock_irqsave(&dev_priv->context_switch_lock, flags);
nv_mask(dev, 0x400500, 0x00000001, 0x00000000);
start = ptimer->read(dev);
do {
idle = true;
for (tmp = nv_rd32(dev, 0x400380); tmp && idle; tmp >>= 3) {
if ((tmp & 7) == 1)
idle = false;
}
for (tmp = nv_rd32(dev, 0x400384); tmp && idle; tmp >>= 3) {
if ((tmp & 7) == 1)
idle = false;
}
for (tmp = nv_rd32(dev, 0x400388); tmp && idle; tmp >>= 3) {
if ((tmp & 7) == 1)
idle = false;
}
} while (!idle && !(timeout = ptimer->read(dev) - start > 2000000000));
if (timeout) {
NV_ERROR(dev, "PGRAPH TLB flush idle timeout fail: "
"0x%08x 0x%08x 0x%08x 0x%08x\n",
nv_rd32(dev, 0x400700), nv_rd32(dev, 0x400380),
nv_rd32(dev, 0x400384), nv_rd32(dev, 0x400388));
}
nv50_vm_flush(dev, 0);
nv_mask(dev, 0x400500, 0x00000001, 0x00000001);
spin_unlock_irqrestore(&dev_priv->context_switch_lock, flags);
}
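/* The scan loops above walk the PGRAPH status words three bits at a time,
 * treating a field value of 1 as a unit that is still busy. An equivalent
 * helper form (an illustrative sketch only):
 */
static bool nv86_pgraph_units_idle(u32 status)
{
	for (; status; status >>= 3) {
		if ((status & 7) == 1)	/* unit not yet idle */
			return false;
	}
	return true;
}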

View file

@ -402,7 +402,6 @@ nv50_instmem_bind(struct drm_device *dev, struct nouveau_gpuobj *gpuobj)
}
dev_priv->engine.instmem.flush(dev);
nv50_vm_flush(dev, 4);
nv50_vm_flush(dev, 6);
gpuobj->im_bound = 1;

View file

@ -1650,7 +1650,36 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
}
}
rdev->config.evergreen.tile_config = gb_addr_config;
/* setup tiling info dword. gb_addr_config is not adequate since it does
* not have bank info, so create a custom tiling dword.
* bits 3:0 num_pipes
* bits 7:4 num_banks
* bits 11:8 group_size
* bits 15:12 row_size
*/
rdev->config.evergreen.tile_config = 0;
switch (rdev->config.evergreen.max_tile_pipes) {
case 1:
default:
rdev->config.evergreen.tile_config |= (0 << 0);
break;
case 2:
rdev->config.evergreen.tile_config |= (1 << 0);
break;
case 4:
rdev->config.evergreen.tile_config |= (2 << 0);
break;
case 8:
rdev->config.evergreen.tile_config |= (3 << 0);
break;
}
rdev->config.evergreen.tile_config |=
((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT) << 4;
rdev->config.evergreen.tile_config |=
((mc_arb_ramcfg & BURSTLENGTH_MASK) >> BURSTLENGTH_SHIFT) << 8;
rdev->config.evergreen.tile_config |=
((gb_addr_config & 0x30000000) >> 28) << 12;
WREG32(GB_BACKEND_MAP, gb_backend_map);
WREG32(GB_ADDR_CONFIG, gb_addr_config);
WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
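/* Decoding sketch for the dword assembled above (illustrative only; the
 * pipe field stores log2 of the pipe count per the switch, the remaining
 * fields are copied raw from RAMCFG and GB_ADDR_CONFIG): */
static unsigned evergreen_tile_config_pipes(u32 tile_config)
{
	return 1u << (tile_config & 0xf);	/* bits 3:0 = log2(num_pipes) */
}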

View file

@ -459,7 +459,7 @@ int evergreen_blit_init(struct radeon_device *rdev)
obj_size += evergreen_ps_size * 4;
obj_size = ALIGN(obj_size, 256);
r = radeon_bo_create(rdev, NULL, obj_size, true, RADEON_GEM_DOMAIN_VRAM,
r = radeon_bo_create(rdev, NULL, obj_size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
&rdev->r600_blit.shader_obj);
if (r) {
DRM_ERROR("evergreen failed to allocate shader\n");

View file

@ -2718,7 +2718,7 @@ static int r600_ih_ring_alloc(struct radeon_device *rdev)
/* Allocate ring buffer */
if (rdev->ih.ring_obj == NULL) {
r = radeon_bo_create(rdev, NULL, rdev->ih.ring_size,
true,
PAGE_SIZE, true,
RADEON_GEM_DOMAIN_GTT,
&rdev->ih.ring_obj);
if (r) {

View file

@ -501,7 +501,7 @@ int r600_blit_init(struct radeon_device *rdev)
obj_size += r6xx_ps_size * 4;
obj_size = ALIGN(obj_size, 256);
r = radeon_bo_create(rdev, NULL, obj_size, true, RADEON_GEM_DOMAIN_VRAM,
r = radeon_bo_create(rdev, NULL, obj_size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
&rdev->r600_blit.shader_obj);
if (r) {
DRM_ERROR("r600 failed to allocate shader\n");

View file

@ -50,6 +50,7 @@ struct r600_cs_track {
u32 nsamples;
u32 cb_color_base_last[8];
struct radeon_bo *cb_color_bo[8];
u64 cb_color_bo_mc[8];
u32 cb_color_bo_offset[8];
struct radeon_bo *cb_color_frag_bo[8];
struct radeon_bo *cb_color_tile_bo[8];
@ -67,6 +68,7 @@ struct r600_cs_track {
u32 db_depth_size;
u32 db_offset;
struct radeon_bo *db_bo;
u64 db_bo_mc;
};
static inline int r600_bpe_from_format(u32 *bpe, u32 format)
@ -140,6 +142,68 @@ static inline int r600_bpe_from_format(u32 *bpe, u32 format)
return 0;
}
struct array_mode_checker {
int array_mode;
u32 group_size;
u32 nbanks;
u32 npipes;
u32 nsamples;
u32 bpe;
};
/* returns alignment in pixels for pitch/height/depth and bytes for base */
static inline int r600_get_array_mode_alignment(struct array_mode_checker *values,
u32 *pitch_align,
u32 *height_align,
u32 *depth_align,
u64 *base_align)
{
u32 tile_width = 8;
u32 tile_height = 8;
u32 macro_tile_width = values->nbanks;
u32 macro_tile_height = values->npipes;
u32 tile_bytes = tile_width * tile_height * values->bpe * values->nsamples;
u32 macro_tile_bytes = macro_tile_width * macro_tile_height * tile_bytes;
switch (values->array_mode) {
case ARRAY_LINEAR_GENERAL:
/* technically tile_width/_height for pitch/height */
*pitch_align = 1; /* tile_width */
*height_align = 1; /* tile_height */
*depth_align = 1;
*base_align = 1;
break;
case ARRAY_LINEAR_ALIGNED:
*pitch_align = max((u32)64, (u32)(values->group_size / values->bpe));
*height_align = tile_height;
*depth_align = 1;
*base_align = values->group_size;
break;
case ARRAY_1D_TILED_THIN1:
*pitch_align = max((u32)tile_width,
(u32)(values->group_size /
(tile_height * values->bpe * values->nsamples)));
*height_align = tile_height;
*depth_align = 1;
*base_align = values->group_size;
break;
case ARRAY_2D_TILED_THIN1:
*pitch_align = max((u32)macro_tile_width,
(u32)(((values->group_size / tile_height) /
(values->bpe * values->nsamples)) *
values->nbanks)) * tile_width;
*height_align = macro_tile_height * tile_height;
*depth_align = 1;
*base_align = max(macro_tile_bytes,
(*pitch_align) * values->bpe * (*height_align) * values->nsamples);
break;
default:
return -EINVAL;
}
return 0;
}
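/* Worked example with hypothetical card values: ARRAY_2D_TILED_THIN1 with
 * nbanks = 8, npipes = 4, group_size = 256, bpe = 4, nsamples = 1 gives
 *   tile_bytes       = 8 * 8 * 4 * 1                       = 256
 *   macro_tile_bytes = 8 * 4 * 256                         = 8192
 *   pitch_align      = max(8, (256 / 8) / (4 * 1) * 8) * 8 = 512 pixels
 *   height_align     = 4 * 8                               = 32
 *   base_align       = max(8192, 512 * 4 * 32 * 1)         = 65536 bytes */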
static void r600_cs_track_init(struct r600_cs_track *track)
{
int i;
@ -153,10 +217,12 @@ static void r600_cs_track_init(struct r600_cs_track *track)
track->cb_color_info[i] = 0;
track->cb_color_bo[i] = NULL;
track->cb_color_bo_offset[i] = 0xFFFFFFFF;
track->cb_color_bo_mc[i] = 0xFFFFFFFF;
}
track->cb_target_mask = 0xFFFFFFFF;
track->cb_shader_mask = 0xFFFFFFFF;
track->db_bo = NULL;
track->db_bo_mc = 0xFFFFFFFF;
/* assume the biggest format and that htile is enabled */
track->db_depth_info = 7 | (1 << 25);
track->db_depth_view = 0xFFFFC000;
@ -168,7 +234,10 @@ static void r600_cs_track_init(struct r600_cs_track *track)
static inline int r600_cs_track_validate_cb(struct radeon_cs_parser *p, int i)
{
struct r600_cs_track *track = p->track;
u32 bpe = 0, pitch, slice_tile_max, size, tmp, height, pitch_align;
u32 bpe = 0, slice_tile_max, size, tmp;
u32 height, height_align, pitch, pitch_align, depth_align;
u64 base_offset, base_align;
struct array_mode_checker array_check;
volatile u32 *ib = p->ib->ptr;
unsigned array_mode;
@ -183,60 +252,40 @@ static inline int r600_cs_track_validate_cb(struct radeon_cs_parser *p, int i)
i, track->cb_color_info[i]);
return -EINVAL;
}
/* pitch is the number of 8x8 tiles per row */
pitch = G_028060_PITCH_TILE_MAX(track->cb_color_size[i]) + 1;
/* pitch in pixels */
pitch = (G_028060_PITCH_TILE_MAX(track->cb_color_size[i]) + 1) * 8;
slice_tile_max = G_028060_SLICE_TILE_MAX(track->cb_color_size[i]) + 1;
slice_tile_max *= 64;
height = slice_tile_max / (pitch * 8);
height = slice_tile_max / pitch;
if (height > 8192)
height = 8192;
array_mode = G_0280A0_ARRAY_MODE(track->cb_color_info[i]);
base_offset = track->cb_color_bo_mc[i] + track->cb_color_bo_offset[i];
array_check.array_mode = array_mode;
array_check.group_size = track->group_size;
array_check.nbanks = track->nbanks;
array_check.npipes = track->npipes;
array_check.nsamples = track->nsamples;
array_check.bpe = bpe;
if (r600_get_array_mode_alignment(&array_check,
&pitch_align, &height_align, &depth_align, &base_align)) {
dev_warn(p->dev, "%s invalid tiling %d for %d (0x%08X)\n", __func__,
G_0280A0_ARRAY_MODE(track->cb_color_info[i]), i,
track->cb_color_info[i]);
return -EINVAL;
}
switch (array_mode) {
case V_0280A0_ARRAY_LINEAR_GENERAL:
/* technically height & 0x7 */
break;
case V_0280A0_ARRAY_LINEAR_ALIGNED:
pitch_align = max((u32)64, (u32)(track->group_size / bpe)) / 8;
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
if (!IS_ALIGNED(height, 8)) {
dev_warn(p->dev, "%s:%d cb height (%d) invalid\n",
__func__, __LINE__, height);
return -EINVAL;
}
break;
case V_0280A0_ARRAY_1D_TILED_THIN1:
pitch_align = max((u32)8, (u32)(track->group_size / (8 * bpe * track->nsamples))) / 8;
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
/* avoid breaking userspace */
if (height > 7)
height &= ~0x7;
if (!IS_ALIGNED(height, 8)) {
dev_warn(p->dev, "%s:%d cb height (%d) invalid\n",
__func__, __LINE__, height);
return -EINVAL;
}
break;
case V_0280A0_ARRAY_2D_TILED_THIN1:
pitch_align = max((u32)track->nbanks,
(u32)(((track->group_size / 8) / (bpe * track->nsamples)) * track->nbanks)) / 8;
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
if (!IS_ALIGNED((height / 8), track->npipes)) {
dev_warn(p->dev, "%s:%d cb height (%d) invalid\n",
__func__, __LINE__, height);
return -EINVAL;
}
break;
default:
dev_warn(p->dev, "%s invalid tiling %d for %d (0x%08X)\n", __func__,
@ -244,13 +293,29 @@ static inline int r600_cs_track_validate_cb(struct radeon_cs_parser *p, int i)
track->cb_color_info[i]);
return -EINVAL;
}
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d cb pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
if (!IS_ALIGNED(height, height_align)) {
dev_warn(p->dev, "%s:%d cb height (%d) invalid\n",
__func__, __LINE__, height);
return -EINVAL;
}
if (!IS_ALIGNED(base_offset, base_align)) {
dev_warn(p->dev, "%s offset[%d] 0x%llx not aligned\n", __func__, i, base_offset);
return -EINVAL;
}
/* check offset */
tmp = height * pitch * 8 * bpe;
tmp = height * pitch * bpe;
if ((tmp + track->cb_color_bo_offset[i]) > radeon_bo_size(track->cb_color_bo[i])) {
if (array_mode == V_0280A0_ARRAY_LINEAR_GENERAL) {
/* the initial DDX does bad things with the CB size occasionally */
/* it rounds up height too far for slice tile max but the BO is smaller */
tmp = (height - 7) * 8 * bpe;
tmp = (height - 7) * pitch * bpe;
if ((tmp + track->cb_color_bo_offset[i]) > radeon_bo_size(track->cb_color_bo[i])) {
dev_warn(p->dev, "%s offset[%d] %d %d %lu too big\n", __func__, i, track->cb_color_bo_offset[i], tmp, radeon_bo_size(track->cb_color_bo[i]));
return -EINVAL;
@ -260,15 +325,11 @@ static inline int r600_cs_track_validate_cb(struct radeon_cs_parser *p, int i)
return -EINVAL;
}
}
if (!IS_ALIGNED(track->cb_color_bo_offset[i], track->group_size)) {
dev_warn(p->dev, "%s offset[%d] %d not aligned\n", __func__, i, track->cb_color_bo_offset[i]);
return -EINVAL;
}
/* limit max tile */
tmp = (height * pitch * 8) >> 6;
tmp = (height * pitch) >> 6;
if (tmp < slice_tile_max)
slice_tile_max = tmp;
tmp = S_028060_PITCH_TILE_MAX(pitch - 1) |
tmp = S_028060_PITCH_TILE_MAX((pitch / 8) - 1) |
S_028060_SLICE_TILE_MAX(slice_tile_max - 1);
ib[track->cb_color_size_idx[i]] = tmp;
return 0;
@ -310,7 +371,12 @@ static int r600_cs_track_check(struct radeon_cs_parser *p)
/* Check depth buffer */
if (G_028800_STENCIL_ENABLE(track->db_depth_control) ||
G_028800_Z_ENABLE(track->db_depth_control)) {
u32 nviews, bpe, ntiles, pitch, pitch_align, height, size, slice_tile_max;
u32 nviews, bpe, ntiles, size, slice_tile_max;
u32 height, height_align, pitch, pitch_align, depth_align;
u64 base_offset, base_align;
struct array_mode_checker array_check;
int array_mode;
if (track->db_bo == NULL) {
dev_warn(p->dev, "z/stencil with no depth buffer\n");
return -EINVAL;
@ -353,41 +419,34 @@ static int r600_cs_track_check(struct radeon_cs_parser *p)
ib[track->db_depth_size_idx] = S_028000_SLICE_TILE_MAX(tmp - 1) | (track->db_depth_size & 0x3FF);
} else {
size = radeon_bo_size(track->db_bo);
pitch = G_028000_PITCH_TILE_MAX(track->db_depth_size) + 1;
/* pitch in pixels */
pitch = (G_028000_PITCH_TILE_MAX(track->db_depth_size) + 1) * 8;
slice_tile_max = G_028000_SLICE_TILE_MAX(track->db_depth_size) + 1;
slice_tile_max *= 64;
height = slice_tile_max / (pitch * 8);
height = slice_tile_max / pitch;
if (height > 8192)
height = 8192;
switch (G_028010_ARRAY_MODE(track->db_depth_info)) {
base_offset = track->db_bo_mc + track->db_offset;
array_mode = G_028010_ARRAY_MODE(track->db_depth_info);
array_check.array_mode = array_mode;
array_check.group_size = track->group_size;
array_check.nbanks = track->nbanks;
array_check.npipes = track->npipes;
array_check.nsamples = track->nsamples;
array_check.bpe = bpe;
if (r600_get_array_mode_alignment(&array_check,
&pitch_align, &height_align, &depth_align, &base_align)) {
dev_warn(p->dev, "%s invalid tiling %d (0x%08X)\n", __func__,
G_028010_ARRAY_MODE(track->db_depth_info),
track->db_depth_info);
return -EINVAL;
}
switch (array_mode) {
case V_028010_ARRAY_1D_TILED_THIN1:
pitch_align = (max((u32)8, (u32)(track->group_size / (8 * bpe))) / 8);
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d db pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
/* don't break userspace */
height &= ~0x7;
if (!IS_ALIGNED(height, 8)) {
dev_warn(p->dev, "%s:%d db height (%d) invalid\n",
__func__, __LINE__, height);
return -EINVAL;
}
break;
case V_028010_ARRAY_2D_TILED_THIN1:
pitch_align = max((u32)track->nbanks,
(u32)(((track->group_size / 8) / bpe) * track->nbanks)) / 8;
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d db pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
if (!IS_ALIGNED((height / 8), track->npipes)) {
dev_warn(p->dev, "%s:%d db height (%d) invalid\n",
__func__, __LINE__, height);
return -EINVAL;
}
break;
default:
dev_warn(p->dev, "%s invalid tiling %d (0x%08X)\n", __func__,
@ -395,15 +454,27 @@ static int r600_cs_track_check(struct radeon_cs_parser *p)
track->db_depth_info);
return -EINVAL;
}
if (!IS_ALIGNED(track->db_offset, track->group_size)) {
dev_warn(p->dev, "%s offset[%d] %d not aligned\n", __func__, i, track->db_offset);
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d db pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
if (!IS_ALIGNED(height, height_align)) {
dev_warn(p->dev, "%s:%d db height (%d) invalid\n",
__func__, __LINE__, height);
return -EINVAL;
}
if (!IS_ALIGNED(base_offset, base_align)) {
dev_warn(p->dev, "%s offset[%d] 0x%llx not aligned\n", __func__, i, base_offset);
return -EINVAL;
}
ntiles = G_028000_SLICE_TILE_MAX(track->db_depth_size) + 1;
nviews = G_028004_SLICE_MAX(track->db_depth_view) + 1;
tmp = ntiles * bpe * 64 * nviews;
if ((tmp + track->db_offset) > radeon_bo_size(track->db_bo)) {
dev_warn(p->dev, "z/stencil buffer too small (0x%08X %d %d %d -> %d have %ld)\n",
dev_warn(p->dev, "z/stencil buffer too small (0x%08X %d %d %d -> %u have %lu)\n",
track->db_depth_size, ntiles, nviews, bpe, tmp + track->db_offset,
radeon_bo_size(track->db_bo));
return -EINVAL;
@ -954,6 +1025,7 @@ static inline int r600_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u32 idx
ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
track->cb_color_base_last[tmp] = ib[idx];
track->cb_color_bo[tmp] = reloc->robj;
track->cb_color_bo_mc[tmp] = reloc->lobj.gpu_offset;
break;
case DB_DEPTH_BASE:
r = r600_cs_packet_next_reloc(p, &reloc);
@ -965,6 +1037,7 @@ static inline int r600_cs_check_reg(struct radeon_cs_parser *p, u32 reg, u32 idx
track->db_offset = radeon_get_ib_value(p, idx) << 8;
ib[idx] += (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
track->db_bo = reloc->robj;
track->db_bo_mc = reloc->lobj.gpu_offset;
break;
case DB_HTILE_DATA_BASE:
case SQ_PGM_START_FS:
@ -1086,16 +1159,25 @@ static void r600_texture_size(unsigned nfaces, unsigned blevel, unsigned nlevels
static inline int r600_check_texture_resource(struct radeon_cs_parser *p, u32 idx,
struct radeon_bo *texture,
struct radeon_bo *mipmap,
u64 base_offset,
u64 mip_offset,
u32 tiling_flags)
{
struct r600_cs_track *track = p->track;
u32 nfaces, nlevels, blevel, w0, h0, d0, bpe = 0;
u32 word0, word1, l0_size, mipmap_size, pitch, pitch_align;
u32 word0, word1, l0_size, mipmap_size;
u32 height_align, pitch, pitch_align, depth_align;
u64 base_align;
struct array_mode_checker array_check;
/* on legacy kernel we don't perform advanced check */
if (p->rdev == NULL)
return 0;
/* convert to bytes */
base_offset <<= 8;
mip_offset <<= 8;
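/* The CS carries surface base addresses in 256-byte units, so both offsets
 * are shifted up by 8 before being checked against the byte-based
 * base_align. */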
word0 = radeon_get_ib_value(p, idx + 0);
if (tiling_flags & RADEON_TILING_MACRO)
word0 |= S_038000_TILE_MODE(V_038000_ARRAY_2D_TILED_THIN1);
@ -1128,46 +1210,38 @@ static inline int r600_check_texture_resource(struct radeon_cs_parser *p, u32 i
return -EINVAL;
}
pitch = G_038000_PITCH(word0) + 1;
switch (G_038000_TILE_MODE(word0)) {
case V_038000_ARRAY_LINEAR_GENERAL:
pitch_align = 1;
/* XXX check height align */
break;
case V_038000_ARRAY_LINEAR_ALIGNED:
pitch_align = max((u32)64, (u32)(track->group_size / bpe)) / 8;
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
/* XXX check height align */
break;
case V_038000_ARRAY_1D_TILED_THIN1:
pitch_align = max((u32)8, (u32)(track->group_size / (8 * bpe))) / 8;
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
/* XXX check height align */
break;
case V_038000_ARRAY_2D_TILED_THIN1:
pitch_align = max((u32)track->nbanks,
(u32)(((track->group_size / 8) / bpe) * track->nbanks)) / 8;
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
/* XXX check height align */
break;
default:
dev_warn(p->dev, "%s invalid tiling %d (0x%08X)\n", __func__,
G_038000_TILE_MODE(word0), word0);
/* pitch in texels */
pitch = (G_038000_PITCH(word0) + 1) * 8;
array_check.array_mode = G_038000_TILE_MODE(word0);
array_check.group_size = track->group_size;
array_check.nbanks = track->nbanks;
array_check.npipes = track->npipes;
array_check.nsamples = 1;
array_check.bpe = bpe;
if (r600_get_array_mode_alignment(&array_check,
&pitch_align, &height_align, &depth_align, &base_align)) {
dev_warn(p->dev, "%s:%d tex array mode (%d) invalid\n",
__func__, __LINE__, G_038000_TILE_MODE(word0));
return -EINVAL;
}
/* XXX check height as well... */
if (!IS_ALIGNED(pitch, pitch_align)) {
dev_warn(p->dev, "%s:%d tex pitch (%d) invalid\n",
__func__, __LINE__, pitch);
return -EINVAL;
}
if (!IS_ALIGNED(base_offset, base_align)) {
dev_warn(p->dev, "%s:%d tex base offset (0x%llx) invalid\n",
__func__, __LINE__, base_offset);
return -EINVAL;
}
if (!IS_ALIGNED(mip_offset, base_align)) {
dev_warn(p->dev, "%s:%d tex mip offset (0x%llx) invalid\n",
__func__, __LINE__, mip_offset);
return -EINVAL;
}
/* XXX check offset align */
word0 = radeon_get_ib_value(p, idx + 4);
word1 = radeon_get_ib_value(p, idx + 5);
@ -1402,7 +1476,10 @@ static int r600_packet3_check(struct radeon_cs_parser *p,
mip_offset = (u32)((reloc->lobj.gpu_offset >> 8) & 0xffffffff);
mipmap = reloc->robj;
r = r600_check_texture_resource(p, idx+(i*7)+1,
texture, mipmap, reloc->lobj.tiling_flags);
texture, mipmap,
base_offset + radeon_get_ib_value(p, idx+1+(i*7)+2),
mip_offset + radeon_get_ib_value(p, idx+1+(i*7)+3),
reloc->lobj.tiling_flags);
if (r)
return r;
ib[idx+1+(i*7)+2] += base_offset;

View file

@ -51,6 +51,12 @@
#define PTE_READABLE (1 << 5)
#define PTE_WRITEABLE (1 << 6)
/* tiling bits */
#define ARRAY_LINEAR_GENERAL 0x00000000
#define ARRAY_LINEAR_ALIGNED 0x00000001
#define ARRAY_1D_TILED_THIN1 0x00000002
#define ARRAY_2D_TILED_THIN1 0x00000004
/* Registers */
#define ARB_POP 0x2418
#define ENABLE_TC128 (1 << 30)

View file

@ -1262,6 +1262,10 @@ void r100_pll_errata_after_index(struct radeon_device *rdev);
(rdev->family == CHIP_RS400) || \
(rdev->family == CHIP_RS480))
#define ASIC_IS_AVIVO(rdev) ((rdev->family >= CHIP_RS600))
#define ASIC_IS_DCE2(rdev) ((rdev->family == CHIP_RS600) || \
(rdev->family == CHIP_RS690) || \
(rdev->family == CHIP_RS740) || \
(rdev->family >= CHIP_R600))
#define ASIC_IS_DCE3(rdev) ((rdev->family >= CHIP_RV620))
#define ASIC_IS_DCE32(rdev) ((rdev->family >= CHIP_RV730))
#define ASIC_IS_DCE4(rdev) ((rdev->family >= CHIP_CEDAR))

View file

@ -41,7 +41,7 @@ void radeon_benchmark_move(struct radeon_device *rdev, unsigned bsize,
size = bsize;
n = 1024;
r = radeon_bo_create(rdev, NULL, size, true, sdomain, &sobj);
r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true, sdomain, &sobj);
if (r) {
goto out_cleanup;
}
@ -53,7 +53,7 @@ void radeon_benchmark_move(struct radeon_device *rdev, unsigned bsize,
if (r) {
goto out_cleanup;
}
r = radeon_bo_create(rdev, NULL, size, true, ddomain, &dobj);
r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true, ddomain, &dobj);
if (r) {
goto out_cleanup;
}

View file

@ -571,6 +571,7 @@ static struct radeon_i2c_bus_rec combios_setup_i2c_bus(struct radeon_device *rde
}
if (clk_mask && data_mask) {
/* system specific masks */
i2c.mask_clk_mask = clk_mask;
i2c.mask_data_mask = data_mask;
i2c.a_clk_mask = clk_mask;
@ -579,7 +580,19 @@ static struct radeon_i2c_bus_rec combios_setup_i2c_bus(struct radeon_device *rde
i2c.en_data_mask = data_mask;
i2c.y_clk_mask = clk_mask;
i2c.y_data_mask = data_mask;
} else if ((ddc_line == RADEON_GPIOPAD_MASK) ||
(ddc_line == RADEON_MDGPIO_MASK)) {
/* default gpiopad masks */
i2c.mask_clk_mask = (0x20 << 8);
i2c.mask_data_mask = 0x80;
i2c.a_clk_mask = (0x20 << 8);
i2c.a_data_mask = 0x80;
i2c.en_clk_mask = (0x20 << 8);
i2c.en_data_mask = 0x80;
i2c.y_clk_mask = (0x20 << 8);
i2c.y_data_mask = 0x80;
} else {
/* default masks for ddc pads */
i2c.mask_clk_mask = RADEON_GPIO_EN_1;
i2c.mask_data_mask = RADEON_GPIO_EN_0;
i2c.a_clk_mask = RADEON_GPIO_A_1;

View file

@ -1008,9 +1008,21 @@ static void radeon_dp_connector_destroy(struct drm_connector *connector)
static int radeon_dp_get_modes(struct drm_connector *connector)
{
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct radeon_connector_atom_dig *radeon_dig_connector = radeon_connector->con_priv;
int ret;
if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
if (!radeon_dig_connector->edp_on)
atombios_set_edp_panel_power(connector,
ATOM_TRANSMITTER_ACTION_POWER_ON);
}
ret = radeon_ddc_get_modes(radeon_connector);
if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
if (!radeon_dig_connector->edp_on)
atombios_set_edp_panel_power(connector,
ATOM_TRANSMITTER_ACTION_POWER_OFF);
}
return ret;
}
@ -1029,8 +1041,14 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) {
/* eDP is always DP */
radeon_dig_connector->dp_sink_type = CONNECTOR_OBJECT_ID_DISPLAYPORT;
if (!radeon_dig_connector->edp_on)
atombios_set_edp_panel_power(connector,
ATOM_TRANSMITTER_ACTION_POWER_ON);
if (radeon_dp_getdpcd(radeon_connector))
ret = connector_status_connected;
if (!radeon_dig_connector->edp_on)
atombios_set_edp_panel_power(connector,
ATOM_TRANSMITTER_ACTION_POWER_OFF);
} else {
radeon_dig_connector->dp_sink_type = radeon_dp_getsinktype(radeon_connector);
if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {

View file

@ -180,7 +180,7 @@ int radeon_wb_init(struct radeon_device *rdev)
int r;
if (rdev->wb.wb_obj == NULL) {
r = radeon_bo_create(rdev, NULL, RADEON_GPU_PAGE_SIZE, true,
r = radeon_bo_create(rdev, NULL, RADEON_GPU_PAGE_SIZE, PAGE_SIZE, true,
RADEON_GEM_DOMAIN_GTT, &rdev->wb.wb_obj);
if (r) {
dev_warn(rdev->dev, "(%d) create WB bo failed\n", r);

View file

@ -176,6 +176,7 @@ static inline bool radeon_encoder_is_digital(struct drm_encoder *encoder)
return false;
}
}
void
radeon_link_encoder_connector(struct drm_device *dev)
{
@ -228,6 +229,27 @@ radeon_get_connector_for_encoder(struct drm_encoder *encoder)
return NULL;
}
struct drm_encoder *radeon_atom_get_external_encoder(struct drm_encoder *encoder)
{
struct drm_device *dev = encoder->dev;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct drm_encoder *other_encoder;
struct radeon_encoder *other_radeon_encoder;
if (radeon_encoder->is_ext_encoder)
return NULL;
list_for_each_entry(other_encoder, &dev->mode_config.encoder_list, head) {
if (other_encoder == encoder)
continue;
other_radeon_encoder = to_radeon_encoder(other_encoder);
if (other_radeon_encoder->is_ext_encoder &&
(radeon_encoder->devices & other_radeon_encoder->devices))
return other_encoder;
}
return NULL;
}
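/* External encoders are paired with their primary encoder purely by an
 * overlapping ->devices bitmask above; the first such match wins. */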
void radeon_panel_mode_fixup(struct drm_encoder *encoder,
struct drm_display_mode *adjusted_mode)
{
@ -426,52 +448,49 @@ atombios_tv_setup(struct drm_encoder *encoder, int action)
}
union dvo_encoder_control {
ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION ext_tmds;
DVO_ENCODER_CONTROL_PS_ALLOCATION dvo;
DVO_ENCODER_CONTROL_PS_ALLOCATION_V3 dvo_v3;
};
void
atombios_external_tmds_setup(struct drm_encoder *encoder, int action)
atombios_dvo_setup(struct drm_encoder *encoder, int action)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
ENABLE_EXTERNAL_TMDS_ENCODER_PS_ALLOCATION args;
int index = 0;
union dvo_encoder_control args;
int index = GetIndexIntoMasterTable(COMMAND, DVOEncoderControl);
memset(&args, 0, sizeof(args));
index = GetIndexIntoMasterTable(COMMAND, DVOEncoderControl);
if (ASIC_IS_DCE3(rdev)) {
/* DCE3+ */
args.dvo_v3.ucAction = action;
args.dvo_v3.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10);
args.dvo_v3.ucDVOConfig = 0; /* XXX */
} else if (ASIC_IS_DCE2(rdev)) {
/* DCE2 (pre-DCE3 R6xx, RS600/690/740 */
args.dvo.sDVOEncoder.ucAction = action;
args.dvo.sDVOEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10);
/* DFP1, CRT1, TV1 depending on the type of port */
args.dvo.sDVOEncoder.ucDeviceType = ATOM_DEVICE_DFP1_INDEX;
args.sXTmdsEncoder.ucEnable = action;
if (radeon_encoder->pixel_clock > 165000)
args.dvo.sDVOEncoder.usDevAttr.sDigAttrib.ucAttribute |= PANEL_ENCODER_MISC_DUAL;
} else {
/* R4xx, R5xx */
args.ext_tmds.sXTmdsEncoder.ucEnable = action;
if (radeon_encoder->pixel_clock > 165000)
args.sXTmdsEncoder.ucMisc = PANEL_ENCODER_MISC_DUAL;
if (radeon_encoder->pixel_clock > 165000)
args.ext_tmds.sXTmdsEncoder.ucMisc |= PANEL_ENCODER_MISC_DUAL;
/*if (pScrn->rgbBits == 8)*/
args.sXTmdsEncoder.ucMisc |= (1 << 1);
/*if (pScrn->rgbBits == 8)*/
args.ext_tmds.sXTmdsEncoder.ucMisc |= ATOM_PANEL_MISC_888RGB;
}
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
static void
atombios_ddia_setup(struct drm_encoder *encoder, int action)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
DVO_ENCODER_CONTROL_PS_ALLOCATION args;
int index = 0;
memset(&args, 0, sizeof(args));
index = GetIndexIntoMasterTable(COMMAND, DVOEncoderControl);
args.sDVOEncoder.ucAction = action;
args.sDVOEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10);
if (radeon_encoder->pixel_clock > 165000)
args.sDVOEncoder.usDevAttr.sDigAttrib.ucAttribute = PANEL_ENCODER_MISC_DUAL;
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
union lvds_encoder_control {
@ -532,14 +551,14 @@ atombios_digital_setup(struct drm_encoder *encoder, int action)
if (dig->lcd_misc & ATOM_PANEL_MISC_DUAL)
args.v1.ucMisc |= PANEL_ENCODER_MISC_DUAL;
if (dig->lcd_misc & ATOM_PANEL_MISC_888RGB)
args.v1.ucMisc |= (1 << 1);
args.v1.ucMisc |= ATOM_PANEL_MISC_888RGB;
} else {
if (dig->linkb)
args.v1.ucMisc |= PANEL_ENCODER_MISC_TMDS_LINKB;
if (radeon_encoder->pixel_clock > 165000)
args.v1.ucMisc |= PANEL_ENCODER_MISC_DUAL;
/*if (pScrn->rgbBits == 8) */
args.v1.ucMisc |= (1 << 1);
args.v1.ucMisc |= ATOM_PANEL_MISC_888RGB;
}
break;
case 2:
@ -595,6 +614,7 @@ atombios_digital_setup(struct drm_encoder *encoder, int action)
int
atombios_get_encoder_mode(struct drm_encoder *encoder)
{
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct drm_connector *connector;
@ -602,9 +622,20 @@ atombios_get_encoder_mode(struct drm_encoder *encoder)
struct radeon_connector_atom_dig *dig_connector;
connector = radeon_get_connector_for_encoder(encoder);
if (!connector)
return 0;
if (!connector) {
switch (radeon_encoder->encoder_id) {
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
return ATOM_ENCODER_MODE_DVI;
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
default:
return ATOM_ENCODER_MODE_CRT;
}
}
radeon_connector = to_radeon_connector(connector);
switch (connector->connector_type) {
@ -834,6 +865,9 @@ atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t
memset(&args, 0, sizeof(args));
switch (radeon_encoder->encoder_id) {
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl);
break;
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
@ -978,6 +1012,105 @@ atombios_dig_transmitter_setup(struct drm_encoder *encoder, int action, uint8_t
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
void
atombios_set_edp_panel_power(struct drm_connector *connector, int action)
{
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct drm_device *dev = radeon_connector->base.dev;
struct radeon_device *rdev = dev->dev_private;
union dig_transmitter_control args;
int index = GetIndexIntoMasterTable(COMMAND, UNIPHYTransmitterControl);
uint8_t frev, crev;
if (connector->connector_type != DRM_MODE_CONNECTOR_eDP)
return;
if (!ASIC_IS_DCE4(rdev))
return;
if ((action != ATOM_TRANSMITTER_ACTION_POWER_ON) &&
(action != ATOM_TRANSMITTER_ACTION_POWER_OFF))
return;
if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
return;
memset(&args, 0, sizeof(args));
args.v1.ucAction = action;
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
union external_encoder_control {
EXTERNAL_ENCODER_CONTROL_PS_ALLOCATION v1;
};
static void
atombios_external_encoder_setup(struct drm_encoder *encoder,
struct drm_encoder *ext_encoder,
int action)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
union external_encoder_control args;
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
int index = GetIndexIntoMasterTable(COMMAND, ExternalEncoderControl);
u8 frev, crev;
int dp_clock = 0;
int dp_lane_count = 0;
int connector_object_id = 0;
if (connector) {
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct radeon_connector_atom_dig *dig_connector =
radeon_connector->con_priv;
dp_clock = dig_connector->dp_clock;
dp_lane_count = dig_connector->dp_lane_count;
connector_object_id =
(radeon_connector->connector_object_id & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT;
}
memset(&args, 0, sizeof(args));
if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
return;
switch (frev) {
case 1:
/* no params on frev 1 */
break;
case 2:
switch (crev) {
case 1:
case 2:
args.v1.sDigEncoder.ucAction = action;
args.v1.sDigEncoder.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10);
args.v1.sDigEncoder.ucEncoderMode = atombios_get_encoder_mode(encoder);
if (args.v1.sDigEncoder.ucEncoderMode == ATOM_ENCODER_MODE_DP) {
if (dp_clock == 270000)
args.v1.sDigEncoder.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ;
args.v1.sDigEncoder.ucLaneNum = dp_lane_count;
} else if (radeon_encoder->pixel_clock > 165000)
args.v1.sDigEncoder.ucLaneNum = 8;
else
args.v1.sDigEncoder.ucLaneNum = 4;
break;
default:
DRM_ERROR("Unknown table version: %d, %d\n", frev, crev);
return;
}
break;
default:
DRM_ERROR("Unknown table version: %d, %d\n", frev, crev);
return;
}
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
static void
atombios_yuv_setup(struct drm_encoder *encoder, bool enable)
{
@ -1021,6 +1154,7 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder);
DISPLAY_DEVICE_OUTPUT_CONTROL_PS_ALLOCATION args;
int index = 0;
bool is_dig = false;
@ -1043,9 +1177,14 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
break;
case ENCODER_OBJECT_ID_INTERNAL_DVO1:
case ENCODER_OBJECT_ID_INTERNAL_DDI:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl);
break;
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
if (ASIC_IS_DCE3(rdev))
is_dig = true;
else
index = GetIndexIntoMasterTable(COMMAND, DVOOutputControl);
break;
case ENCODER_OBJECT_ID_INTERNAL_LVDS:
index = GetIndexIntoMasterTable(COMMAND, LCD1OutputControl);
break;
@ -1082,34 +1221,85 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) {
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
if (connector &&
(connector->connector_type == DRM_MODE_CONNECTOR_eDP)) {
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct radeon_connector_atom_dig *radeon_dig_connector =
radeon_connector->con_priv;
atombios_set_edp_panel_power(connector,
ATOM_TRANSMITTER_ACTION_POWER_ON);
radeon_dig_connector->edp_on = true;
}
dp_link_train(encoder, connector);
if (ASIC_IS_DCE4(rdev))
atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON);
}
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0);
break;
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
case DRM_MODE_DPMS_OFF:
atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0);
if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) {
struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
if (ASIC_IS_DCE4(rdev))
atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF);
if (connector &&
(connector->connector_type == DRM_MODE_CONNECTOR_eDP)) {
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct radeon_connector_atom_dig *radeon_dig_connector =
radeon_connector->con_priv;
atombios_set_edp_panel_power(connector,
ATOM_TRANSMITTER_ACTION_POWER_OFF);
radeon_dig_connector->edp_on = false;
}
}
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_LCD_BLOFF, 0, 0);
break;
}
} else {
switch (mode) {
case DRM_MODE_DPMS_ON:
args.ucAction = ATOM_ENABLE;
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
args.ucAction = ATOM_LCD_BLON;
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
break;
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
case DRM_MODE_DPMS_OFF:
args.ucAction = ATOM_DISABLE;
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
args.ucAction = ATOM_LCD_BLOFF;
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
break;
}
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
if (ext_encoder) {
int action;
switch (mode) {
case DRM_MODE_DPMS_ON:
default:
action = ATOM_ENABLE;
break;
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
case DRM_MODE_DPMS_OFF:
action = ATOM_DISABLE;
break;
}
atombios_external_encoder_setup(encoder, ext_encoder, action);
}
radeon_atombios_encoder_dpms_scratch_regs(encoder, (mode == DRM_MODE_DPMS_ON) ? true : false);
}
@ -1242,7 +1432,7 @@ atombios_set_encoder_crtc_source(struct drm_encoder *encoder)
break;
default:
DRM_ERROR("Unknown table version: %d, %d\n", frev, crev);
break;
return;
}
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
@ -1357,6 +1547,7 @@ radeon_atom_encoder_mode_set(struct drm_encoder *encoder,
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct drm_encoder *ext_encoder = radeon_atom_get_external_encoder(encoder);
radeon_encoder->pixel_clock = adjusted_mode->clock;
@ -1400,11 +1591,9 @@ radeon_atom_encoder_mode_set(struct drm_encoder *encoder,
}
break;
case ENCODER_OBJECT_ID_INTERNAL_DDI:
atombios_ddia_setup(encoder, ATOM_ENABLE);
break;
case ENCODER_OBJECT_ID_INTERNAL_DVO1:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
atombios_external_tmds_setup(encoder, ATOM_ENABLE);
atombios_dvo_setup(encoder, ATOM_ENABLE);
break;
case ENCODER_OBJECT_ID_INTERNAL_DAC1:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
@ -1419,6 +1608,11 @@ radeon_atom_encoder_mode_set(struct drm_encoder *encoder,
}
break;
}
if (ext_encoder) {
atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE);
}
atombios_apply_encoder_quirks(encoder, adjusted_mode);
if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) {
@ -1595,11 +1789,9 @@ static void radeon_atom_encoder_disable(struct drm_encoder *encoder)
}
break;
case ENCODER_OBJECT_ID_INTERNAL_DDI:
atombios_ddia_setup(encoder, ATOM_DISABLE);
break;
case ENCODER_OBJECT_ID_INTERNAL_DVO1:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
atombios_external_tmds_setup(encoder, ATOM_DISABLE);
atombios_dvo_setup(encoder, ATOM_DISABLE);
break;
case ENCODER_OBJECT_ID_INTERNAL_DAC1:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
@ -1621,6 +1813,53 @@ static void radeon_atom_encoder_disable(struct drm_encoder *encoder)
radeon_encoder->active_device = 0;
}
/* these are handled by the primary encoders */
static void radeon_atom_ext_prepare(struct drm_encoder *encoder)
{
}
static void radeon_atom_ext_commit(struct drm_encoder *encoder)
{
}
static void
radeon_atom_ext_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
}
static void radeon_atom_ext_disable(struct drm_encoder *encoder)
{
}
static void
radeon_atom_ext_dpms(struct drm_encoder *encoder, int mode)
{
}
static bool radeon_atom_ext_mode_fixup(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
return true;
}
static const struct drm_encoder_helper_funcs radeon_atom_ext_helper_funcs = {
.dpms = radeon_atom_ext_dpms,
.mode_fixup = radeon_atom_ext_mode_fixup,
.prepare = radeon_atom_ext_prepare,
.mode_set = radeon_atom_ext_mode_set,
.commit = radeon_atom_ext_commit,
.disable = radeon_atom_ext_disable,
/* no detect for TMDS/LVDS yet */
};
static const struct drm_encoder_helper_funcs radeon_atom_dig_helper_funcs = {
.dpms = radeon_atom_encoder_dpms,
.mode_fixup = radeon_atom_mode_fixup,
@ -1730,6 +1969,7 @@ radeon_add_atom_encoder(struct drm_device *dev, uint32_t encoder_enum, uint32_t
radeon_encoder->devices = supported_device;
radeon_encoder->rmx_type = RMX_OFF;
radeon_encoder->underscan_type = UNDERSCAN_OFF;
radeon_encoder->is_ext_encoder = false;
switch (radeon_encoder->encoder_id) {
case ENCODER_OBJECT_ID_INTERNAL_LVDS:
@ -1771,6 +2011,9 @@ radeon_add_atom_encoder(struct drm_device *dev, uint32_t encoder_enum, uint32_t
radeon_encoder->rmx_type = RMX_FULL;
drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS);
radeon_encoder->enc_priv = radeon_atombios_get_lvds_info(radeon_encoder);
} else if (radeon_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) {
drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC);
radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder);
} else {
drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS);
radeon_encoder->enc_priv = radeon_atombios_set_dig_info(radeon_encoder);
@ -1779,5 +2022,22 @@ radeon_add_atom_encoder(struct drm_device *dev, uint32_t encoder_enum, uint32_t
}
drm_encoder_helper_add(encoder, &radeon_atom_dig_helper_funcs);
break;
case ENCODER_OBJECT_ID_SI170B:
case ENCODER_OBJECT_ID_CH7303:
case ENCODER_OBJECT_ID_EXTERNAL_SDVOA:
case ENCODER_OBJECT_ID_EXTERNAL_SDVOB:
case ENCODER_OBJECT_ID_TITFP513:
case ENCODER_OBJECT_ID_VT1623:
case ENCODER_OBJECT_ID_HDMI_SI1930:
/* these are handled by the primary encoders */
radeon_encoder->is_ext_encoder = true;
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_LVDS);
else if (radeon_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT))
drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_DAC);
else
drm_encoder_init(dev, encoder, &radeon_atom_enc_funcs, DRM_MODE_ENCODER_TMDS);
drm_encoder_helper_add(encoder, &radeon_atom_ext_helper_funcs);
break;
}
}
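
The new external-encoder support registers SI170B/CH7303/SDVO-class chips with a helper table whose callbacks are empty, because the primary encoder's hooks drive them through atombios_external_encoder_setup(). A toy version of the same pattern, an ops table of deliberate no-ops (types hypothetical, not the DRM structs):

#include <stdio.h>

struct encoder;

struct encoder_ops {
	void (*prepare)(struct encoder *e);
	void (*commit)(struct encoder *e);
};

/* The primary encoder does the real work, so the external encoder
 * only needs stubs that satisfy the interface. */
static void ext_prepare(struct encoder *e) { (void)e; }
static void ext_commit(struct encoder *e)  { (void)e; }

static const struct encoder_ops ext_ops = {
	.prepare = ext_prepare,
	.commit  = ext_commit,
};

struct encoder {
	const struct encoder_ops *ops;
	int is_ext;
};

int main(void)
{
	struct encoder ext = { .ops = &ext_ops, .is_ext = 1 };

	ext.ops->prepare(&ext);	/* always safe to call, does nothing */
	ext.ops->commit(&ext);
	printf("external encoder driven through no-op helpers\n");
	return 0;
}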

View file

@ -79,8 +79,8 @@ int radeon_gart_table_vram_alloc(struct radeon_device *rdev)
if (rdev->gart.table.vram.robj == NULL) {
r = radeon_bo_create(rdev, NULL, rdev->gart.table_size,
true, RADEON_GEM_DOMAIN_VRAM,
&rdev->gart.table.vram.robj);
PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
&rdev->gart.table.vram.robj);
if (r) {
return r;
}

View file

@ -67,7 +67,7 @@ int radeon_gem_object_create(struct radeon_device *rdev, int size,
if (alignment < PAGE_SIZE) {
alignment = PAGE_SIZE;
}
r = radeon_bo_create(rdev, gobj, size, kernel, initial_domain, &robj);
r = radeon_bo_create(rdev, gobj, size, alignment, kernel, initial_domain, &robj);
if (r) {
if (r != -ERESTARTSYS)
DRM_ERROR("Failed to allocate GEM object (%d, %d, %u, %d)\n",

View file

@ -896,7 +896,8 @@ struct radeon_i2c_chan *radeon_i2c_create(struct drm_device *dev,
((rdev->family <= CHIP_RS480) ||
((rdev->family >= CHIP_RV515) && (rdev->family <= CHIP_R580))))) {
/* set the radeon hw i2c adapter */
sprintf(i2c->adapter.name, "Radeon i2c hw bus %s", name);
snprintf(i2c->adapter.name, sizeof(i2c->adapter.name),
"Radeon i2c hw bus %s", name);
i2c->adapter.algo = &radeon_i2c_algo;
ret = i2c_add_adapter(&i2c->adapter);
if (ret) {
@ -905,7 +906,8 @@ struct radeon_i2c_chan *radeon_i2c_create(struct drm_device *dev,
}
} else {
/* set the radeon bit adapter */
sprintf(i2c->adapter.name, "Radeon i2c bit bus %s", name);
snprintf(i2c->adapter.name, sizeof(i2c->adapter.name),
"Radeon i2c bit bus %s", name);
i2c->adapter.algo_data = &i2c->algo.bit;
i2c->algo.bit.pre_xfer = pre_xfer;
i2c->algo.bit.post_xfer = post_xfer;
@ -946,6 +948,8 @@ struct radeon_i2c_chan *radeon_i2c_create_dp(struct drm_device *dev,
i2c->rec = *rec;
i2c->adapter.owner = THIS_MODULE;
i2c->dev = dev;
snprintf(i2c->adapter.name, sizeof(i2c->adapter.name),
"Radeon aux bus %s", name);
i2c_set_adapdata(&i2c->adapter, i2c);
i2c->adapter.algo_data = &i2c->algo.dp;
i2c->algo.dp.aux_ch = radeon_dp_i2c_aux_ch;
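
Both radeon_i2c_create() branches switch from sprintf() to snprintf() so a long bus name cannot overflow the fixed-size adapter.name array, and radeon_i2c_create_dp() gains a name where it previously had none. The truncation behaviour in isolation (buffer size chosen arbitrarily for the demo):

#include <stdio.h>

int main(void)
{
	char name[20];	/* stands in for the fixed-size adapter.name field */
	int n;

	/* snprintf() never writes past sizeof(name) and always
	 * NUL-terminates; sprintf() would have run off the end. */
	n = snprintf(name, sizeof(name),
		     "Radeon i2c hw bus %s", "a-very-long-bus-name");

	printf("stored: \"%s\"\n", name);
	if (n >= (int)sizeof(name))
		printf("truncated (needed %d bytes)\n", n);
	return 0;
}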

View file

@ -76,7 +76,7 @@ int radeon_enable_vblank(struct drm_device *dev, int crtc)
default:
DRM_ERROR("tried to enable vblank on non-existent crtc %d\n",
crtc);
return EINVAL;
return -EINVAL;
}
} else {
switch (crtc) {
@ -89,7 +89,7 @@ int radeon_enable_vblank(struct drm_device *dev, int crtc)
default:
DRM_ERROR("tried to enable vblank on non-existent crtc %d\n",
crtc);
return EINVAL;
return -EINVAL;
}
}
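
The vblank fix is a sign error: the in-kernel convention is to return 0 for success and a negative errno for failure, so a bare EINVAL (a positive constant) would have looked like success to any caller testing ret < 0. In miniature:

#include <errno.h>
#include <stdio.h>

/* Kernel-style contract: 0 on success, -errno on failure. */
static int enable_vblank(int crtc)
{
	if (crtc != 0 && crtc != 1)
		return -EINVAL;	/* 'return EINVAL;' would read as success */
	return 0;
}

int main(void)
{
	int ret = enable_vblank(7);

	if (ret < 0)
		printf("rejected bogus crtc: %d\n", ret);
	return 0;
}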

View file

@ -670,7 +670,7 @@ static void radeon_legacy_tmds_ext_mode_set(struct drm_encoder *encoder,
if (rdev->is_atom_bios) {
radeon_encoder->pixel_clock = adjusted_mode->clock;
atombios_external_tmds_setup(encoder, ATOM_ENABLE);
atombios_dvo_setup(encoder, ATOM_ENABLE);
fp2_gen_cntl = RREG32(RADEON_FP2_GEN_CNTL);
} else {
fp2_gen_cntl = RREG32(RADEON_FP2_GEN_CNTL);

View file

@ -375,6 +375,7 @@ struct radeon_encoder {
int hdmi_config_offset;
int hdmi_audio_workaround;
int hdmi_buffer_status;
bool is_ext_encoder;
};
struct radeon_connector_atom_dig {
@ -385,6 +386,7 @@ struct radeon_connector_atom_dig {
u8 dp_sink_type;
int dp_clock;
int dp_lane_count;
bool edp_on;
};
struct radeon_gpio_rec {
@ -523,9 +525,10 @@ struct drm_encoder *radeon_encoder_legacy_primary_dac_add(struct drm_device *dev
struct drm_encoder *radeon_encoder_legacy_tv_dac_add(struct drm_device *dev, int bios_index, int with_tv);
struct drm_encoder *radeon_encoder_legacy_tmds_int_add(struct drm_device *dev, int bios_index);
struct drm_encoder *radeon_encoder_legacy_tmds_ext_add(struct drm_device *dev, int bios_index);
extern void atombios_external_tmds_setup(struct drm_encoder *encoder, int action);
extern void atombios_dvo_setup(struct drm_encoder *encoder, int action);
extern void atombios_digital_setup(struct drm_encoder *encoder, int action);
extern int atombios_get_encoder_mode(struct drm_encoder *encoder);
extern void atombios_set_edp_panel_power(struct drm_connector *connector, int action);
extern void radeon_encoder_set_active_device(struct drm_encoder *encoder);
extern void radeon_crtc_load_lut(struct drm_crtc *crtc);

View file

@ -86,11 +86,12 @@ void radeon_ttm_placement_from_domain(struct radeon_bo *rbo, u32 domain)
}
int radeon_bo_create(struct radeon_device *rdev, struct drm_gem_object *gobj,
unsigned long size, bool kernel, u32 domain,
struct radeon_bo **bo_ptr)
unsigned long size, int byte_align, bool kernel, u32 domain,
struct radeon_bo **bo_ptr)
{
struct radeon_bo *bo;
enum ttm_bo_type type;
int page_align = roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT;
int r;
if (unlikely(rdev->mman.bdev.dev_mapping == NULL)) {
@ -115,7 +116,7 @@ int radeon_bo_create(struct radeon_device *rdev, struct drm_gem_object *gobj,
/* Kernel allocation are uninterruptible */
mutex_lock(&rdev->vram_mutex);
r = ttm_bo_init(&rdev->mman.bdev, &bo->tbo, size, type,
&bo->placement, 0, 0, !kernel, NULL, size,
&bo->placement, page_align, 0, !kernel, NULL, size,
&radeon_ttm_bo_destroy);
mutex_unlock(&rdev->vram_mutex);
if (unlikely(r != 0)) {
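
radeon_bo_create() grows a byte_align parameter, converted to whole pages for ttm_bo_init() as roundup(byte_align, PAGE_SIZE) >> PAGE_SHIFT. The callers in this series pass PAGE_SIZE, except the GEM path, which forwards the requested alignment after clamping it up to at least PAGE_SIZE. The page arithmetic, assuming 4 KiB pages:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Same rounding the kernel's roundup() macro performs. */
#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	unsigned long aligns[] = { 1, PAGE_SIZE, PAGE_SIZE + 1, 3 * PAGE_SIZE };
	unsigned int i;

	for (i = 0; i < sizeof(aligns) / sizeof(aligns[0]); i++) {
		unsigned long page_align =
			roundup(aligns[i], PAGE_SIZE) >> PAGE_SHIFT;
		/* PAGE_SIZE in, 1 page out: existing callers are unchanged */
		printf("byte_align=%6lu -> page_align=%lu\n",
		       aligns[i], page_align);
	}
	return 0;
}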

View file

@ -137,9 +137,10 @@ static inline int radeon_bo_wait(struct radeon_bo *bo, u32 *mem_type,
}
extern int radeon_bo_create(struct radeon_device *rdev,
struct drm_gem_object *gobj, unsigned long size,
bool kernel, u32 domain,
struct radeon_bo **bo_ptr);
struct drm_gem_object *gobj, unsigned long size,
int byte_align,
bool kernel, u32 domain,
struct radeon_bo **bo_ptr);
extern int radeon_bo_kmap(struct radeon_bo *bo, void **ptr);
extern void radeon_bo_kunmap(struct radeon_bo *bo);
extern void radeon_bo_unref(struct radeon_bo **bo);

View file

@ -176,8 +176,8 @@ int radeon_ib_pool_init(struct radeon_device *rdev)
INIT_LIST_HEAD(&rdev->ib_pool.bogus_ib);
/* Allocate 1M object buffer */
r = radeon_bo_create(rdev, NULL, RADEON_IB_POOL_SIZE*64*1024,
true, RADEON_GEM_DOMAIN_GTT,
&rdev->ib_pool.robj);
PAGE_SIZE, true, RADEON_GEM_DOMAIN_GTT,
&rdev->ib_pool.robj);
if (r) {
DRM_ERROR("radeon: failed to ib pool (%d).\n", r);
return r;
@ -332,7 +332,7 @@ int radeon_ring_init(struct radeon_device *rdev, unsigned ring_size)
rdev->cp.ring_size = ring_size;
/* Allocate ring buffer */
if (rdev->cp.ring_obj == NULL) {
r = radeon_bo_create(rdev, NULL, rdev->cp.ring_size, true,
r = radeon_bo_create(rdev, NULL, rdev->cp.ring_size, PAGE_SIZE, true,
RADEON_GEM_DOMAIN_GTT,
&rdev->cp.ring_obj);
if (r) {

View file

@ -52,7 +52,7 @@ void radeon_test_moves(struct radeon_device *rdev)
goto out_cleanup;
}
r = radeon_bo_create(rdev, NULL, size, true, RADEON_GEM_DOMAIN_VRAM,
r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
&vram_obj);
if (r) {
DRM_ERROR("Failed to create VRAM object\n");
@ -71,7 +71,7 @@ void radeon_test_moves(struct radeon_device *rdev)
void **gtt_start, **gtt_end;
void **vram_start, **vram_end;
r = radeon_bo_create(rdev, NULL, size, true,
r = radeon_bo_create(rdev, NULL, size, PAGE_SIZE, true,
RADEON_GEM_DOMAIN_GTT, gtt_obj + i);
if (r) {
DRM_ERROR("Failed to create GTT object %d\n", i);

View file

@ -529,7 +529,7 @@ int radeon_ttm_init(struct radeon_device *rdev)
DRM_ERROR("Failed initializing VRAM heap.\n");
return r;
}
r = radeon_bo_create(rdev, NULL, 256 * 1024, true,
r = radeon_bo_create(rdev, NULL, 256 * 1024, PAGE_SIZE, true,
RADEON_GEM_DOMAIN_VRAM,
&rdev->stollen_vga_memory);
if (r) {

View file

@ -915,8 +915,8 @@ static int rv770_vram_scratch_init(struct radeon_device *rdev)
if (rdev->vram_scratch.robj == NULL) {
r = radeon_bo_create(rdev, NULL, RADEON_GPU_PAGE_SIZE,
true, RADEON_GEM_DOMAIN_VRAM,
&rdev->vram_scratch.robj);
PAGE_SIZE, true, RADEON_GEM_DOMAIN_VRAM,
&rdev->vram_scratch.robj);
if (r) {
return r;
}

View file

@ -224,6 +224,9 @@ int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
int ret;
while (unlikely(atomic_cmpxchg(&bo->reserved, 0, 1) != 0)) {
/**
* Deadlock avoidance for multi-bo reserving.
*/
if (use_sequence && bo->seq_valid &&
(sequence - bo->val_seq < (1 << 31))) {
return -EAGAIN;
@ -241,6 +244,14 @@ int ttm_bo_reserve_locked(struct ttm_buffer_object *bo,
}
if (use_sequence) {
/**
* Wake up waiters that may need to recheck for deadlock,
* if we decreased the sequence number.
*/
if (unlikely((bo->val_seq - sequence < (1 << 31))
|| !bo->seq_valid))
wake_up_all(&bo->event_queue);
bo->val_seq = sequence;
bo->seq_valid = true;
} else {
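
The comments added to ttm_bo_reserve_locked() document the sequence-number deadlock avoidance. The load-bearing idiom is the unsigned subtraction (sequence - bo->val_seq < (1 << 31)), which keeps the ordering decision correct when a 32-bit counter wraps, akin to the kernel's jiffies time_before() trick. A quick check of the wrap case:

#include <stdint.h>
#include <stdio.h>

/* True if 'a' is at or after 'b' in modular 32-bit sequence space;
 * unsigned subtraction makes the comparison wrap-safe. */
static int seq_after_eq(uint32_t a, uint32_t b)
{
	return a - b < (1u << 31);
}

int main(void)
{
	printf("%d\n", seq_after_eq(10, 5));             /* 1: plainly after */
	printf("%d\n", seq_after_eq(5, 10));             /* 0: before */
	printf("%d\n", seq_after_eq(3, UINT32_MAX - 2)); /* 1: after, across the wrap */
	return 0;
}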

View file

@ -862,7 +862,7 @@ int vmw_dmabuf_alloc_ioctl(struct drm_device *dev, void *data,
&vmw_vram_sys_placement, true,
&vmw_user_dmabuf_destroy);
if (unlikely(ret != 0))
return ret;
goto out_no_dmabuf;
tmp = ttm_bo_reference(&vmw_user_bo->dma.base);
ret = ttm_base_object_init(vmw_fpriv(file_priv)->tfile,
@ -870,19 +870,21 @@ int vmw_dmabuf_alloc_ioctl(struct drm_device *dev, void *data,
false,
ttm_buffer_type,
&vmw_user_dmabuf_release, NULL);
if (unlikely(ret != 0)) {
ttm_bo_unref(&tmp);
} else {
if (unlikely(ret != 0))
goto out_no_base_object;
else {
rep->handle = vmw_user_bo->base.hash.key;
rep->map_handle = vmw_user_bo->dma.base.addr_space_offset;
rep->cur_gmr_id = vmw_user_bo->base.hash.key;
rep->cur_gmr_offset = 0;
}
ttm_bo_unref(&tmp);
out_no_base_object:
ttm_bo_unref(&tmp);
out_no_dmabuf:
ttm_read_unlock(&vmaster->lock);
return 0;
return ret;
}
int vmw_dmabuf_unref_ioctl(struct drm_device *dev, void *data,
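
The vmw_dmabuf_alloc_ioctl() rework is the standard goto-unwind shape: failures jump to labels that unwind in reverse order of setup, the read lock is dropped on every path, and the real ret is propagated instead of a hardcoded 0. The same skeleton reduced to its control flow (the resources here are placeholders):

#include <errno.h>
#include <stdio.h>

static void lock(void)   { printf("lock\n"); }
static void unlock(void) { printf("unlock\n"); }

static int alloc_ioctl(int fail_at)
{
	int ret = 0;

	lock();

	if (fail_at == 1) {
		ret = -ENOMEM;
		goto out_no_dmabuf;	/* nothing else to undo yet */
	}
	if (fail_at == 2) {
		ret = -EINVAL;
		goto out_no_base_object;
	}

out_no_base_object:
	/* drop the extra object reference here */
out_no_dmabuf:
	unlock();	/* runs on every path */
	return ret;	/* the real error, not a hardcoded 0 */
}

int main(void)
{
	printf("ok=%d\n", alloc_ioctl(0));
	printf("first failure=%d\n", alloc_ioctl(1));
	printf("second failure=%d\n", alloc_ioctl(2));
	return 0;
}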

View file

@ -752,7 +752,7 @@ static int input_default_setkeycode(struct input_dev *dev,
if (index >= dev->keycodemax)
return -EINVAL;
if (dev->keycodesize < sizeof(dev->keycode) &&
if (dev->keycodesize < sizeof(ke->keycode) &&
(ke->keycode >> (dev->keycodesize * 8)))
return -EINVAL;
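
input_default_setkeycode() was validating the wrong variable: the range check must apply to the keycode being installed (ke->keycode), not to the size of the device's own field. The shift test simply asks whether the value fits in keycodesize bytes, with the size guard keeping the shift amount legal. Distilled:

#include <stdio.h>

/* Reject 'keycode' if it cannot be stored in 'keycodesize' bytes.
 * The first clause also avoids an undefined full-width shift. */
static int keycode_fits(unsigned int keycode, unsigned int keycodesize)
{
	if (keycodesize < sizeof(keycode) &&
	    (keycode >> (keycodesize * 8)))
		return 0;
	return 1;
}

int main(void)
{
	printf("%d\n", keycode_fits(200, 1));	/* 1: fits in one byte */
	printf("%d\n", keycode_fits(300, 1));	/* 0: needs two bytes */
	printf("%d\n", keycode_fits(300, 2));	/* 1: fits in two bytes */
	return 0;
}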

View file

@ -1097,7 +1097,7 @@ store_tabletPointerMode(struct device *dev, struct device_attribute *attr, const
}
static DEVICE_ATTR(pointer_mode,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletPointerMode, store_tabletPointerMode);
/***********************************************************************
@ -1134,7 +1134,7 @@ store_tabletCoordinateMode(struct device *dev, struct device_attribute *attr, co
}
static DEVICE_ATTR(coordinate_mode,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletCoordinateMode, store_tabletCoordinateMode);
/***********************************************************************
@ -1176,7 +1176,7 @@ store_tabletToolMode(struct device *dev, struct device_attribute *attr, const ch
}
static DEVICE_ATTR(tool_mode,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletToolMode, store_tabletToolMode);
/***********************************************************************
@ -1219,7 +1219,7 @@ store_tabletXtilt(struct device *dev, struct device_attribute *attr, const char
}
static DEVICE_ATTR(xtilt,
S_IRUGO | S_IWUGO, show_tabletXtilt, store_tabletXtilt);
S_IRUGO | S_IWUSR, show_tabletXtilt, store_tabletXtilt);
/***********************************************************************
* support routines for the 'ytilt' file. Note that this file
@ -1261,7 +1261,7 @@ store_tabletYtilt(struct device *dev, struct device_attribute *attr, const char
}
static DEVICE_ATTR(ytilt,
S_IRUGO | S_IWUGO, show_tabletYtilt, store_tabletYtilt);
S_IRUGO | S_IWUSR, show_tabletYtilt, store_tabletYtilt);
/***********************************************************************
* support routines for the 'jitter' file. Note that this file
@ -1288,7 +1288,7 @@ store_tabletJitterDelay(struct device *dev, struct device_attribute *attr, const
}
static DEVICE_ATTR(jitter,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletJitterDelay, store_tabletJitterDelay);
/***********************************************************************
@ -1317,7 +1317,7 @@ store_tabletProgrammableDelay(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR(delay,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletProgrammableDelay, store_tabletProgrammableDelay);
/***********************************************************************
@ -1406,7 +1406,7 @@ store_tabletStylusUpper(struct device *dev, struct device_attribute *attr, const
}
static DEVICE_ATTR(stylus_upper,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletStylusUpper, store_tabletStylusUpper);
/***********************************************************************
@ -1437,7 +1437,7 @@ store_tabletStylusLower(struct device *dev, struct device_attribute *attr, const
}
static DEVICE_ATTR(stylus_lower,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletStylusLower, store_tabletStylusLower);
/***********************************************************************
@ -1475,7 +1475,7 @@ store_tabletMouseLeft(struct device *dev, struct device_attribute *attr, const c
}
static DEVICE_ATTR(mouse_left,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletMouseLeft, store_tabletMouseLeft);
/***********************************************************************
@ -1505,7 +1505,7 @@ store_tabletMouseMiddle(struct device *dev, struct device_attribute *attr, const
}
static DEVICE_ATTR(mouse_middle,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletMouseMiddle, store_tabletMouseMiddle);
/***********************************************************************
@ -1535,7 +1535,7 @@ store_tabletMouseRight(struct device *dev, struct device_attribute *attr, const
}
static DEVICE_ATTR(mouse_right,
S_IRUGO | S_IWUGO,
S_IRUGO | S_IWUSR,
show_tabletMouseRight, store_tabletMouseRight);
/***********************************************************************
@ -1567,7 +1567,7 @@ store_tabletWheel(struct device *dev, struct device_attribute *attr, const char
}
static DEVICE_ATTR(wheel,
S_IRUGO | S_IWUGO, show_tabletWheel, store_tabletWheel);
S_IRUGO | S_IWUSR, show_tabletWheel, store_tabletWheel);
/***********************************************************************
* support routines for the 'execute' file. Note that this file
@ -1600,7 +1600,7 @@ store_tabletExecute(struct device *dev, struct device_attribute *attr, const cha
}
static DEVICE_ATTR(execute,
S_IRUGO | S_IWUGO, show_tabletExecute, store_tabletExecute);
S_IRUGO | S_IWUSR, show_tabletExecute, store_tabletExecute);
/***********************************************************************
* support routines for the 'odm_code' file. Note that this file

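Every aiptek attribute above drops from S_IRUGO | S_IWUGO (world-writable, 0666) to S_IRUGO | S_IWUSR (0644), so only the owner may write these sysfs files. The mode arithmetic, with the macro values written out since this sketch lives outside the kernel headers:

#include <stdio.h>

/* Values as defined in the kernel's stat headers. */
#define S_IRUGO 0444	/* read: user, group, other */
#define S_IWUGO 0222	/* write: user, group, other */
#define S_IWUSR 0200	/* write: user only */

int main(void)
{
	printf("old mode: %04o\n", S_IRUGO | S_IWUGO);	/* 0666 */
	printf("new mode: %04o\n", S_IRUGO | S_IWUSR);	/* 0644 */
	return 0;
}
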
View file

@ -699,7 +699,8 @@ DEFINE_WINDOW_IO(32)
#define DEVICE_PCI(dev) NULL
#endif
#define VORTEX_PCI(vp) (((vp)->gendev) ? DEVICE_PCI((vp)->gendev) : NULL)
#define VORTEX_PCI(vp) \
((struct pci_dev *) (((vp)->gendev) ? DEVICE_PCI((vp)->gendev) : NULL))
#ifdef CONFIG_EISA
#define DEVICE_EISA(dev) (((dev)->bus == &eisa_bus_type) ? to_eisa_device((dev)) : NULL)
@ -707,7 +708,8 @@ DEFINE_WINDOW_IO(32)
#define DEVICE_EISA(dev) NULL
#endif
#define VORTEX_EISA(vp) (((vp)->gendev) ? DEVICE_EISA((vp)->gendev) : NULL)
#define VORTEX_EISA(vp) \
((struct eisa_device *) (((vp)->gendev) ? DEVICE_EISA((vp)->gendev) : NULL))
/* The action to take with a media selection timer tick.
Note that we deviate from the 3Com order by checking 10base2 before AUI.

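Casting the conditional gives VORTEX_PCI()/VORTEX_EISA() a definite pointer type even in configurations where the inner DEVICE_PCI()/DEVICE_EISA() macro collapses to a bare NULL, so call sites receive a typed pointer without casting it themselves. A toy equivalent of the PCI-less configuration:

#include <stdio.h>

struct pci_dev { int devfn; };

/* With PCI compiled out, the inner macro is just NULL ... */
#define DEVICE_PCI(dev) NULL

/* ... so the outer cast is what pins down the macro's result type. */
#define VORTEX_PCI(vp) \
	((struct pci_dev *) (((vp)->gendev) ? DEVICE_PCI((vp)->gendev) : NULL))

struct vortex_private { void *gendev; };

int main(void)
{
	struct vortex_private vp = { .gendev = NULL };
	struct pci_dev *pdev = VORTEX_PCI(&vp);	/* no cast at the call site */

	printf("%p\n", (void *)pdev);
	return 0;
}
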
View file

@ -490,13 +490,11 @@ static inline unsigned int cp_rx_csum_ok (u32 status)
{
unsigned int protocol = (status >> 16) & 0x3;
if (likely((protocol == RxProtoTCP) && (!(status & TCPFail))))
if (((protocol == RxProtoTCP) && !(status & TCPFail)) ||
((protocol == RxProtoUDP) && !(status & UDPFail)))
return 1;
else if ((protocol == RxProtoUDP) && (!(status & UDPFail)))
return 1;
else if ((protocol == RxProtoIP) && (!(status & IPFail)))
return 1;
return 0;
else
return 0;
}
static int cp_rx_poll(struct napi_struct *napi, int budget)

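Beyond folding the if-chain, the cp_rx_csum_ok() rewrite drops the RxProtoIP/IPFail arm: a good IP-header checksum says nothing about the TCP/UDP payload, so only frames whose L4 checksum was actually verified should be reported as checksum-good. The resulting predicate, with a simplified status layout (the bit positions below are illustrative, not the 8139cp register format):

#include <stdio.h>

enum { RxProtoTCP = 1, RxProtoUDP = 2 };
#define TCPFail (1u << 0)	/* illustrative bit positions */
#define UDPFail (1u << 1)

static unsigned int cp_rx_csum_ok(unsigned int status)
{
	unsigned int protocol = (status >> 16) & 0x3;

	if (((protocol == RxProtoTCP) && !(status & TCPFail)) ||
	    ((protocol == RxProtoUDP) && !(status & UDPFail)))
		return 1;
	else
		return 0;
}

int main(void)
{
	printf("%u\n", cp_rx_csum_ok(RxProtoTCP << 16));		/* 1 */
	printf("%u\n", cp_rx_csum_ok((RxProtoTCP << 16) | TCPFail));	/* 0 */
	printf("%u\n", cp_rx_csum_ok(0));				/* 0: not TCP/UDP */
	return 0;
}
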
View file

@ -82,7 +82,7 @@ static int atl1c_get_permanent_address(struct atl1c_hw *hw)
addr[0] = addr[1] = 0;
AT_READ_REG(hw, REG_OTP_CTRL, &otp_ctrl_data);
if (atl1c_check_eeprom_exist(hw)) {
if (hw->nic_type == athr_l1c || hw->nic_type == athr_l2c_b) {
if (hw->nic_type == athr_l1c || hw->nic_type == athr_l2c) {
/* Enable OTP CLK */
if (!(otp_ctrl_data & OTP_CTRL_CLK_EN)) {
otp_ctrl_data |= OTP_CTRL_CLK_EN;

Some files were not shown because too many files have changed in this diff.